Old News is proud to release the most complete and well-sourced WWDC 2017 write-up available. I have posted it below as an eBook, and it should be showing up on the iBooks Store shortly, all available for free!
Apple is holding its annual Worldwide Developers Conference (WWDC), where they effectively shut down operations on campus for a week in order to spend the necessary time showing the world what they’ve been working on. The week starts with a keynote, then a Platforms State of the Union, and then splits off into a variety of workshops which run all week. These sessions allow Apple to share valuable insights with the developers who make the apps we come to love and rely upon.
I’ve written this tome to be a handy one-stop shop for everything that happened at the conference. Appendices A and B go through all of the software and hardware features announced. I’ve also compiled the most prominent and influential reviews, along with videos of every major product revision of the past 15 years, as Appendix C. Further reading, history, guides, and explanations fill out the remainder of the appendices.
+ History: A Stroll Through the Orchard
WWDC is always a huge deal for Apple, and due to timing Apple is frequently seen with its back against the wall and needing to respond. History has shown us Apple responds well, and some of my favorite moments are when the company has seemed to drop the veneer and address these issues head on: like when Phil used some colorful language to announce his newest creation; or this commercial, which I always felt played as a love note to their employees; or Phil again when he recognized Marco publicly for his poignant-if-emotional critique of the state of Apple software stability in 2015. Like clockwork, each year Apple is portrayed as on the defensive, due to some combination of the event coming after Microsoft’s Build and Google’s I/O conferences, often being Apple’s first major public event since the previous fall, and the bizarre yet prevailing narrative that Apple is perpetually a company on the brink of irrelevancy. Despite this, Apple tends to walk away from the event looking as strong as ever, gently putting to rest the mounting concerns which have grown in the intervening 4-8 months of silence. Some years this concern is more palpable than others, however, and I am always fascinated by how Apple responds.
It’s no coincidence that two of the three memories I shared above came from the same keynote in 2013. The company was in a strange place: its founder and now-legendary visionary, Steve Jobs, had passed, and the new leadership was just getting an opportunity to define the next era. Even Apple supporters could be concerned, though few were, that maybe the company had struck gold with the iPhone and was able to extend that to a bigger-screened device with the iPad, but wouldn’t be able to create something new and equally compelling, particularly sans Steve. The narrative anticipated Google was improving at design faster than Apple was improving at web services. While the argument is largely subjective, I feel it’d be hard to argue that’s how the last 4 years have played out. Apple Services are pulling in $7 billion in a quarter now and would be a Fortune 100 company in and of itself (#100 is Capital One Financial at $27.5 billion at the time of writing), with stated plans to double by 2020. The Pixel is a decent-enough-looking knockoff of an iconic, 3-year-old design, and Android’s design aesthetic is cleaning up, but Apple Music is the 2nd largest music streaming service with 27 million paid subscriptions; iCloud adds another 125 million; Apple Pay is the most prevalent contactless payment solution in the US; Maps is far more popular than Google Maps where it’s offered; and iMessage is so prevalent its value is difficult to quantify. You could go on and on and argue that iWork provides a better collaboration experience than Google Docs, even for non-Apple users, or that iCloud Drive is a more elegant solution for storage than Google Drive with features like storage optimization, but I think the point is clear: Apple responded, and it’s no longer popular to say Apple can’t do services.
The 2015 concern around software reliability was a bit more nebulous but also a tad more alarming, as it was coming from inside the notoriously loyal developer community which famously held the company together in the dark days. Apple’s own internal developers seemed a bit surprised by these allegations, as apparently the statistics they had showed the software was more reliable than ever.
Apple’s response has been twofold: one, becoming more transparent and verbal with the developer community; and two, in theory, releasing more stable software. It’s hard to quantify how valid the concern was to begin with, or how much it’s been addressed, either in terms of actual performance or in perception. One of the primary issues was that the complaints weren’t necessarily that things were failing outright; catastrophic failures such as apps crashing or devices forced into shutdown were almost certainly just as functionally nonexistent as before, and likely significantly rarer. Yet we depend on our devices in a whole different way these days, and so little things like a note not syncing for a couple minutes, Continuity not showing up, an iMessage notification only showing on three devices but not the other two, or an Apple TV displaying 5 times on the same wireless network are all things which are very difficult to quantify, even as the user, yet immediately feel a little gross, and the magic of the “it just works” experience is lost.
Beyond how difficult it would be to quantify these performance issues in the world, perception is even more important, and somehow far more difficult to measure or directly impact. Perception can be influenced simply because you know people are complaining about an issue, even if you’ve never experienced it yourself, and now in the social network era the potential for an individual to be aware of someone, somewhere having an issue with an Apple service is drastically higher than ever before. Despite all these challenges, it appears that the community has largely felt this issue has been addressed. Apple certainly is putting more resources towards addressing potential concerns with things like the Apple Support Twitter account and the new Apple Support app for their devices, and we’ve seen the immediate impact on the customer experience with things like the recent MacBook Pro battery life issues at launch. You don’t see many thought pieces about the beleaguered state of Apple software these days, and anecdotally my Twitter feed is filled with far fewer people complaining about and posting screenshots of weird glitches.
So while it seems likely that Apple is well on its way to regaining the magic of “it just works”, there was an additional side concern in 2015 that was starting to loom over the entire company in 2016. When Apple released the iPad Pro, Tim Cook frequently pitched it as “the closest vision at Apple of the future of computing”. Some people read into this, and the relatively unpolished software, and began to question if Apple was starting to lose its edge when it comes to appealing to the “pro” users. Well, for those who started to grow concerned from that statement, the following 18 months sure did a lot to fan those flames. Apple was forced to delay shipping updates to their new MacBook Pro until the very end of last year. When it did ship, the machine that was offered was aesthetically gorgeous and seemingly light as an Air, but also seemed to be light on specs for those “pro” users, sticking with underpowered mobile graphics processing even in the discrete configurations, and offering no more than 16GB of RAM. While Apple was stoked on the technological marvel that is the Touch Bar, it doesn’t make for a great stage demo, and the “pro” users’ other concerns had them too hot to be bothered with this newfangled user interface paradigm.
The machine does ship with some amazingly fast onboard storage, but it comes with a healthy dose of soldering preventing any form of upgradability. It features the most exceptional set of I/O options on a mobile device, but no one had USB-C or Thunderbolt 3 devices yet, so in some ways the new MacBook Pro amounted to the world’s best dongle hub. Even the nomenclature was cumbersome; MacBook Pro with Touch Bar (Four Thunderbolt 3 Ports) doesn’t exactly roll off the tongue in a way typical Apple names, though sometimes silly, mostly do. In so many ways, Apple had released the MacBook Pro Compromised Edition.
Meanwhile, for whatever reason, the iMac didn’t receive its annual speed bump in the fall. The Mac mini was last updated in October of 2014, and, in what would be comical if it weren’t so painful, the Mac Pro, released with so much fanfare, was left chilling since December of 2013 without a single change in its available configurations from day 1, its webpage likely accumulating dust (who could be bothered to check?). When Microsoft announced the new Microsoft Surface Studio all-in-one this fall while Apple’s entire desktop lineup sat with an average time since last update of 3 years, it’s easy to see why frustration was mounting. Marco Arment started retweeting long-time users talking about leaving the Mac, and John Siracusa, with his infamous Mac Pro, rumored to be manufactured before the rise of mammals, was speccing out Hackintoshes; clearly the greater “pro” Apple community was getting desperate.
The situation got so untenable that Apple broke down and played a card everyone had assumed didn’t ship in the official deck: they laid out a roadmap. In what would have played for a good April Fools’ joke, Apple invited a few of the most respected tech journalists to waltz into Jony Ive’s design studio, where Apple executives informed them that a true, newly redesigned Mac Pro was in the works but wouldn’t be able to ship in 2017. In the meantime, there would be a bit of a price drop/spec bump for the current Mac Pros, but due to the extreme thermal limits the current design pushed, they had simply been unable to provide more meaningful updates. “Oh”, they effectively added, “and the iMac will get some new pro configurations”, whatever that means. Yes, Apple invited a bunch of bloggers to sit in one of the most guarded rooms on the planet to talk about unannounced hardware and admit they “innovate(d) [their] ass” into “a bit of a thermal corner”. On the one hand, good for Apple for filling us all in; on the other hand, WTF is going on over there? I’m fairly sure The Accidental Tech Podcast had their most popular episode of all time trying to sort out what on earth to make of this mess.
As I alluded to with the Microsoft Surface Studio, while Apple was doing something in private, the rest of the world was clearly moving forward. Siri, once the pre-eminent name for a digital personal assistant, had quickly been supplanted in the lexicon by Alexa. The world was moving forward with machine learning (best defined by Rene Ritchie as Tinder for neural networks), which is powered by graphics cards. Apple’s Mac hardware was not only 3 years old on average, but had already offered middling graphics performance at launch. Meanwhile, Google announced they had secretly been developing customized silicon themselves, called TPUs, which had been allowing them to perform these modern neural network tasks at far greater efficiency.
So while Siri was stuck fumbling around, these competitors and their fancy learning machines were leaving Apple behind. Beyond that threat, a mere app, Snapchat, appeared to be taking over the world using Apple’s prized goods, the iPhone camera. Their lenses were introducing the world to how augmented reality (AR) could be entertaining, and Snapchat seemed poised to take over the world when their IPO launched earlier this year, only to have their thunder dramatically stolen by Facebook, who announced all the same stuff to a vastly larger audience a few weeks later (truly a great read by a great mind; Ben Thompson never disappoints). With the Oculus Rift, the HTC Vive, Snap’s Spectacles, and the HoloLens, everyone was getting into virtual and mixed realities.
As those who know me would attest, I have been a pretty avid Apple defender since the days of OS X 10.4 Tiger. Through the issues I’ve mentioned above and the many, many others I can remember (this is basically an annual cycle), I have pretty much always seen through the smoke and remained excited at the potential this company provides for the planet. While that was absolutely true Sunday evening, I must admit that this was far and away the least excited I’ve been for a keynote in a decade. What all could Apple really announce? The rumors were flying that all sorts of hardware would get updated, but that seemed unlikely to ever happen at a developers conference. Still, with so much smoke, there’s likely fire, but all that smoke wasn’t giving any indication what would be exciting about the hardware. Not a word could be found about the future of any of the software platforms, and enticing demos put together by amazing people like Federico Viticci, while awesome, in some ways only served to exacerbate the frustrations. While everyone else was partying with AR, all of these efforts were nice tech demos which came with huge caveats and required ungainly hardware. Clearly Apple wasn’t going to be dipping its toes in the waters until it had something far more polished to show.
The big dark horse going into the show was the possibility of a Siri assistant to rival Alexa and Google Home. Cool, I guess. There are plenty of articles written about how these things are the future, and I have a couple friends who do swear by them, but the privacy concerns are really a nonstarter for me. Also, the big question mark for Apple entering the space is the quality of Siri, and no amount of dogs or ponies in a stage demo was going to convince anyone of anything about Siri after her extended beta-life existence. While I personally wasn’t too concerned about the long-term health of the company, I think many people found themselves uncharacteristically bearish, as this article by the wonderful M.G. Siegler does a terrific job of encapsulating, and I wasn’t particularly confident this conference was going to do much to dispel those concerns.
+ Setting the Stage
The Apple of 2017 is a very different beast than it has ever been before as it settles into being consistently the world’s most valuable company. With teams focusing on all sorts of areas you wouldn’t immediately associate with the company, there could never be a better time to be settling into a new office. The campus, with its giant circular structure surrounding a forest, seems to intrinsically evoke concepts of harmony and cooperation. While likely never intended to host a WWDC, it certainly wasn’t ready to yet anyway. Apple decided to take the opportunity to return to San Jose, which is much closer to Cupertino, as well as far more relaxed than San Francisco, where the company had been hosting the event since the start of the iPhone era. I did not attend WWDC, nor have I ever in the past, but even from thousands of miles away the difference in the atmosphere of the conference was immediately obvious, as long-time WWDC attendee Jim Dalrymple writes. Even the tweets coming from the Apple blogosphere seemed more laid back.
This is a company known for precision, with a CEO even more so; Tim Cook-era Apple has always started each keynote and conference within 120 seconds of 10 am Pacific time. Yet this time was different, not only starting late, but with a cheeky short. Apple tested the waters with one of these previously, but shared it off stage. About that silly opening video: maybe it’s just me, but I feel like Apple is getting increasingly good at producing internal content. It’s subtle, but comparing it to the videos of years past seems to show some real growth. If you watch carefully, there are a lot of elaborate sets involved in this short video. The production caliber at Apple has always been top notch, but so much about this short felt like a big-budget feature film and not just an ad for a premium tech company.
I’ll expand more on these thoughts later, but after 2.5k+ words, I really need to get to the actual keynote, which was pretty jammed full of content. Companies always tell you they have a lot to say, and Apple is certainly no exception. Monday’s keynote, however, basically redefined the concept of a tech presentation. Apple seems to really like hitting this 140-minute mark as a limit, which is certainly well short of what some other companies have done, but what is truly wild is how every single part of the presentation felt well paced. Sure, everything was actually moving at an extraordinary clip, and tons of incredibly valuable information was glossed over, but in contrast to the 2013 keynote, which was similarly dense but sometimes felt a bit rushed, everything flowed smoothly. Previously, when the company has done longer keynotes, the end has been reserved for an extended music session or live performance, most of which have been awkward and plodding at best, and downright painful and meme-worthy at worst. This one had none of those issues. I’ve made an outline of the keynote and included it as Appendices A and B; have you ever seen an outline that long? I sure as hell haven’t!
+ The Keynote: And Now for Our Feature Presentation
So let’s get to it. Right off the bat, Apple stated they had 6 major topics to discuss, and the first was tvOS. In a move that everyone suspected was inevitable, Amazon has decided to bring their Prime Video service to the platform. Despite this having absolutely nothing to do with Apple’s work, and rampant speculation that Apple will eventually release a competing service, Apple shared the good news extolling the virtues of Amazon’s service, let us know they had much more to say another day, and then got right on to topic number 2, watchOS.
Wait, what? That’s the whole topic? In a move that was anything but inconspicuous, Apple clearly showed the world they think of tvOS as one of their four platforms and that they have absolutely nothing ready to talk about for it. For those keeping count, tvOS 10 was a pretty minor set of enhancements, and the flagship feature, Single Sign-on, didn’t actually ship with the update and still is only supported by a handful of mostly obscure providers. For a platform the company keeps saying they care passionately about, how embarrassing. Yet somehow, in its literal sixty-five seconds of fame, Tim couldn’t have looked more relaxed and comfortable about the subject. It’s very possible I’m reading too much into this, but this seemed like the most time-efficient way to make it clear that Apple is working hard on the TV and that future WWDCs will have much more attention devoted to the platform.
Outside of the keynote, Apple has shared refinements like asking Siri on the TV remote to connect your AirPods, never needing to use a visual interface to pair them; auto day/night theme switching based on local time; home screens which can be synced between Apple TVs; and notifications, which are now possible. So cool things are being added to the platform immediately, as long as they don’t need a user interface… hmmm.
+ watchOS: Getting to the Wrist
By all accounts, the primary engineers were pretty nervous about battery life preceding the launch of the Watch. News stories kept running discussing the number of hours of use you could get out of the then-imaginary device, fretting that you’d be charging your watch several times a day. But, as plenty of people would point out, this didn’t make a lot of sense. Why would you spend a whole bunch of effort building up muscles so you could actively interact with a tiny screen on your wrist instead of the bigger screen in your pocket?
In too many ways, the original incarnation of the watch tried to be a miniature iPhone. The OS was designed to be as flexible a platform as possible, but with severe power constraints. When the iPhone originally launched it was also highly power constrained, and Apple had success preventing the situation from getting out of control by not allowing third-party apps to run in the background at all until several generations of hardware and software refinements made it possible, and even then, cautiously and judiciously.
The iPhone, and likely every computer Apple had made before it, was an active device. You go to it with some specific goal, or at the very least with the intention of the device holding your attention in some fashion, and the concern is that the device will become a distraction. It turns out that Watch is actually the answer to that concern, and thus the user experience is entirely different from the ground up. You rarely actively use it, and if you do, it’s for something immediate such as setting a timer, starting a workout, playing a song, and, in the most advanced case, controlling some other device/service in your life with Siri or other remote functionality. The whole point of the device is that it allows you to keep your attention elsewhere. Any notification is suddenly as glanceable as the time, and you can turn off your lights discreetly from your wrist rather than clapping noisily or getting up and walking to a switch like a barbarian.
Despite its numeral, watchOS 3 was announced shortly after the one-year anniversary of people owning the device. Though it’s slightly concerning this wasn’t fully appreciated prior to launch, Apple had come to terms with the passive nature of the product category and redesigned the OS accordingly. Oddly enough, you conserve power on a passive device by having it be more proactive. On the iPhone, the better choice was to have the user wait a few seconds for the app to update when the user opened it, to ensure that app hadn’t been sipping/gulping precious power all day for little gain. On a watch, those seconds completely define the user experience.
For the most part, the watch is best used as a notifier, and the only third-party support necessary is to surface the same notifications as the iPhone lock screen. That part is easy from a design perspective. The other half of the watch experience is having immediate, distraction-free access to control the devices and services you use daily. What’s tricky is that while there are likely dozens to hundreds of possible experiences people wish to make immediately available from their wrist, for each person, there’s only a handful of things they individually would like to control. For some, immediate access to starting a workout is a must, while for others surfacing that option would only serve to shame them as they search for a method to control their TV.
Thus watchOS 3 did something which had never happened in iOS: it allowed the user to prioritize which apps received CPU attention. The dedicated button (one of two on the device), previously solely for interacting with messages, now gave access to 10 apps of the user’s choosing, which would be kept in memory at all times and updated as frequently as possible, going so far as to live-update their content from the app drawer menu.
Meanwhile, the user interface was actually brought more in line with iOS. Swiping up from the bottom of the screen used to display Glances, intended to give the user important info quickly (something the new app drawer does much more effectively); in watchOS 3 it brought up Control Center, just like on the iPhone and iPad. Swiping from the edge of the display now allowed you to quickly flip between various watch faces, just as you could swipe between home screens. This new functionality allowed the user to set up different complications in each watch face, and so the most enterprising watch wearers used this as a method to surface the exact few services they would desire in a given activity.
+ watchOS 4
Which brings us to where we were Monday, when Kevin Lynch came on stage to announce watchOS 4. In a move which would serve as a harbinger for the entire presentation, he started by saying one thing that actually said far, far more if you were paying attention. He wanted to share three new watch faces with us, a Proactive Siri watch face… He continued from there, but this little side note of a watch face is actually the future of the platform and will likely be the starting point of watchOS 5. In watchOS 3, the user was responsible for picking which apps received priority and, using the watch faces and their complications, was able to situationally adjust which functions were most accessible. But the watch is fundamentally a passive user experience, and so theoretically it would be far superior for the watch to predict and surface what you want without any interaction necessary. This is precisely what Proactive Siri addresses, and bringing it to a watch face is one of those solutions so obvious I’m a bit embarrassed I didn’t predict it a year ago when it showed up as a widget on the iPhone.
In my mind, I was thinking I wanted the watch to change which watch face was presented to me based on my location. A better solution would be that my watch face would dynamically change to provide the content I need when I’d want it. As of right now, this is presented as a card on a dedicated face, making it easy for me to scroll through a variety of options as Apple learns how to guess what I want. The downside of this interface is you lose the ability to have an analog clock. Hopefully, somewhere down the road, this will lead to an analog watch face where Siri changes the complications available based on what she predicts is useful, but this could only work if she can guess the right thing pretty much every time within 2-4 guesses. Patience. Only time will tell how good Siri is at predicting what I want in her watchOS 4 embodiment.
Smartly, Apple presented this in such a way that no one would think to pick apart Siri’s proficiency, as before you knew it they were showing off a bunch of pretty colors with their dazzling new user-customizable Kaleidoscope face, and then, before you could fire off any drug-related tweets, let the Toy Story stars steal the show with their introduction to the smallest screen. If getting Mickey and Minnie was a big deal, these guys represent Disney’s far more relevant IP, so that partnership is no joke. Playing this out, will Leia and Darth Maul one day be gracing my wrist?
Regaining consciousness, I think it’s extremely telling that there were zero other interface changes to watchOS. Apple has found something that works for now and is back to their M.O. of refinements. As I’ve already made the case, the next big interface change of watchOS is likely Siri. For now, the big thing is increasing what other devices the watch can control and interact with, and so that was the entire remainder of the watchOS 4 presentation. Apple has learned that Activity and Music are to Watch as Messages and Maps are to iPhone. These are the two things which basically everyone uses, where the watch delivers an experience no computer before it could match. With its newfound focus, these were the two apps highlighted during the keynote. Activity gets all sorts of fun stuff, like personalized motivational notifications prompting you to go for that one last stroll to meet your daily goals, or letting you know you’re only two consecutive runs away from a week straight.
Apple is starting to understand how highly people are incentivized by achievements, and teased that the animations will now get more intricate and inspired the more challenging the goal reached. The watch is also becoming more intelligent about workouts, breaking sets automatically when the user rests at the edge of a pool and giving statistics about pace per set and stroke type. They added a new workout mode, High Intensity Interval Training, which has been sweeping across the athletic world like wildfire for the past 10 or so years. Assuming that people who work out often do a lot of workouts, you can now start multiple workouts in succession, allowing you to simply start recording your bike ride when you get out of the water, a big deal for triathletes and the like.
I mentioned that Music was the other app that got attention, but it was so fundamental to watchOS 4 that the first updates apply directly to Activity. You can now choose a playlist which will start automatically when you begin a workout, and music controls are now natively displayed in the Workout app itself. As for the Music app itself, it’s gotten a whole new design, supports multiple playlists, and automatically syncs the music you’ve been listening to recently on your phone at night while it’s charging. You can also easily add individual albums to your Watch. All of this is absolutely genius, and turns the Watch into the premier iPod in many ways until the iPhone Music app can get its act together; again with that patience.
There are a bunch of other refinements that Apple didn’t have time to mention. There’s a news app; a flashlight accessible from Control Center; tips are now presented during the lengthy initial pairing process; the app drawer is now displayed vertically, allowing control from the Digital Crown; a dialing pad for making calls; along with a variety of minor interface improvements increasing accessibility. The watch will even celebrate your birthday with you.
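For developers, these new workout capabilities surface through HealthKit. Here’s a minimal sketch (assuming a watchOS 4 target and that HealthKit authorization has already been granted) of what starting one of the new High Intensity Interval Training sessions might roughly look like:
```swift
import HealthKit

let healthStore = HKHealthStore()

// Describe the workout: HIIT is a new activity type in watchOS 4.
let configuration = HKWorkoutConfiguration()
configuration.activityType = .highIntensityIntervalTraining
configuration.locationType = .indoor

do {
    // Creating the session can throw if the configuration isn't supported.
    let session = try HKWorkoutSession(configuration: configuration)
    healthStore.start(session)
    // The watch now tracks the workout until the app calls
    // healthStore.end(session), or hands off to the next session for
    // back-to-back workouts like a triathlon.
} catch {
    print("Couldn't start the workout: \(error)")
}
```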
+ Developer Time
Yes, Apple decided to name their newest OS High Sierra for, as far as I can tell, the sole purpose of setting up Craig for a nice “edgy” dad joke. In general, Apple is extremely aggressive about cannibalizing their own products and services. After what felt like a decade, but was actually only a few years, of hearing about all the potential iPod killers, the iPhone was the actual death knell for the platform. They release phones approaching the size of their tablets, and have no problem killing an app or framework to achieve their goals. Even the apps that survive get ground-up redesigns every so often so they can support the newest technologies. The impact of this is that every time Apple starts over, there instantly become all sorts of low-hanging fruit for the company to address. When products get to the point of true maturity, it often is a sign that Apple is no longer in active development; there’s just nothing of importance for them to add to the category at that point.
macOS is in a very strange place, as it was built on a foundation which had already been refined for desktop computing for some 30 years. The underlying framework was so solid, in fact, that the OS has spawned a variety of flavors which are all developed to acutely address the things macOS is not perfectly designed for, which, it turns out, is the VAST majority of tasks. Thus macOS has never come to a point where it would make any sense for Apple to consider a ground-up rebuild. They can simply slowly upgrade each component system independently and adjust the interface to keep up with each generation of design sensibilities. If ever a new task were to come up which macOS would need to be redefined to properly address, it really just means Apple will make a new distro of the OS which will better address the use case.
There was concern for a while, particularly back in 2015 with the software “it just works” crisis, that iOS was actually meant to cannibalize its parent. Since macOS was not getting touch, and modern devices use touch, many reasoned that the Mac was slowly riding into the sunset. Only macOS works very well with touch; it just uses a different paradigm. Craig and Phil summed this up nicely during the live episode of The Talk Show for WWDC17: the Mac is centered around the concept of indirect manipulation. The user input is separated from the user display, and so there is a separate user interface for input and output on the Mac. Thus the keyboard is off screen, and the touch experience is handled with the trackpad, and now the Touch Bar. The writing is on the wall that the future of the Mac is one display meant for output and one for input, but haptic technology will need to get drastically better for that compromise to not be so severe.
This system is, for all intents and purposes, perfect for some specific scenarios. It’s hard to imagine a better device for writing a research paper or book, or for writing the code that runs on desktops and servers. It will be a very distant future when those kinds of tasks would somehow be better served on a different medium. Even when I imagine things like a perfect virtual reality, I don’t picture it being easier to write a book with a different user interface paradigm such as voice or handwriting. Maybe telepathy? So yeah, the Mac has a while. Which puts it in a strange place as far as future development.
For most Apple products and services there are laundry lists of things users would like to see addressed, although with iOS 11 the fruit is starting to get pretty far off the ground. On macOS, there are absolutely ways for the OS to improve, and there will never cease to be new technologies for it to adopt and support, but the main concern for any die-hard Mac user would be “don’t screw this up”. So what do you do with a product which has effectively mastered its product category?
+ macOS High Sierra
+ macOS Under the Hood
All of this is well and good, but the big news in High Sierra is the underlying technology improvements, starting with the official macOS launch of Apple File System. Thanks to John Siracusa and his thorough dissections of the underlying file structure, I have literally been anticipating this day for 10 years. Sure, he didn’t get me really excited until Snow Leopard came into the picture, but what a drawn-out romance it has been. For those of you who haven’t read Siracusa’s write-ups and are still somehow reading this, you really should go back and read what he’s written, because even ten years later it’s fantastic. I won’t give much in the way of history as he’s done a far better job than I could.
The biggest thing I’ll add is that since he’s hung up his keyboard for his mic, Apple has continued to do amazing things with HFS+, a ’98 update to the then 13-year-old HFS, released when the internet looked like this, a search engine looked like this, and computers were mostly running this on specs like these. It was designed to work with the IBM processors Apple used back then, and so all metadata, the information the computer uses as a map to find what you want, must be translated during any read or write operation completed with an Intel (and presumably ARM) processor. Actually, HFS+ wasn’t even designed to work with Unix, the backbone of NeXT and Mac OS, and so things at the most basic level such as file system permissions and hard links have been retrofitted to sit atop the storage structure. To put it succinctly, in so many ways Apple has been refining Mac OS with one arm tied behind their back. All of the modern features that are the crux of their operating system, such as Finder, iTunes, Time Machine, and FileVault, as well as specific features of their apps, such as iCloud Drive storage optimization or version history in iWork, are all built in direct conflict with their underlying foundation. They’ve done masterful work to offer these solutions over the years, and for the most part, they’ve managed to do so in a way where features “just work”.
This should all be much easier now. APFS is natively 64-bit just like every other part of the OS; it natively supports encryption, which will go a long way in ensuring nothing goes awry with encrypted drives and will result in drive encryption becoming the default in macOS, if not mandatory as it is with iOS. Users will immediately see cool performance impacts such as files duplicating instantly. I suspect we shall be seeing some major ground-up rebuilds of many core OS apps in the next couple years as Apple takes advantage of all the new toys they earned themselves. As an example, Time Machine today has to back up a new copy of any file which the user edits. For someone like me, who regularly is working with 8 GB audio files, a couple minor adjustments can quickly result in a long backup and an endless desire for more backup space. With APFS, Apple will be able to back up only the parts of a file which have changed, thus dramatically lowering the space and time necessary to complete the task.
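If you want to see the instant duplication for yourself once a volume is on APFS, a quick experiment along these lines should do it. The paths here are hypothetical, and whether a given copy API clones under the hood is up to the system, so treat this as an experiment rather than gospel:
```swift
import Foundation

// On HFS+ this copy takes as long as the drive needs to rewrite every
// byte; on APFS the "copy" should be a clone sharing the original's
// storage, returning nearly instantly regardless of file size.
let source = URL(fileURLWithPath: "/Users/me/session.aif")          // hypothetical 8 GB file
let destination = URL(fileURLWithPath: "/Users/me/session copy.aif")

let start = Date()
do {
    try FileManager.default.copyItem(at: source, to: destination)
    print("Copied in \(Date().timeIntervalSince(start)) seconds")
} catch {
    print("Copy failed: \(error)")
}
```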
In reading this, it would be easy to be confused why Apple has taken so long in addressing this issue. The reality is that building a new file system is a ridiculously involved effort which at its core runs the risk of destroying the most valuable and irreplaceable part of any user’s workflow: their data. It’s something you’re only going to do once every couple decades at most, and so when you release it there need to be effectively no emerging bottlenecks, even on the roadmap. So should everyone be worried that APFS might destroy their data? Well, when they’re upgrading, sure, always have a backup, regardless of something like a file system change. That said, functionally, Apple and its users couldn’t be more confident in this rollout, as they already had a trial run on a larger set of devices. Something like 80% of iDevices are running iOS 10, and any running iOS 10.3 already converted all their data to APFS. Yes, one of the many virtues of APFS is that it is designed specifically to run on the flash storage which all our devices rely on, and so APFS will be the underlying file structure for all of Apple’s platforms from this fall on.
There’s a bunch of technical candy such as native full-disk encryption, TRIM and sparse file support, nanosecond timestamps, fast directory sizing, write coalescing, snapshots, clones, and space sharing, which will allow the user to be far less involved in the process of saving and recovering files. The most common concern among file system geeks is the lack of checksums on the user’s data itself (an intensive mathematical calculation which can ensure the individual bits that compose your data never go astray); APFS opts to only ensure the metadata is verified. In theory this means that your data can slowly corrupt itself and the OS would never recognize it, simply backing up copies of the distorted data. Now, with a service like iCloud Drive, even one unhealthy drive could destroy the file across all your devices. In practice there is debate about whether this issue occurs with Apple’s hardware, and it sounds like it’d be a feature the Apple File System team can work on moving forward if it’s deemed worthwhile. This is perhaps one of the biggest long-term impacts of Apple designing their own file system from the ground up, as they are now able to update APFS in any way they see fit. Apple has planned to open the specification this year, and it seems quite possible APFS will eventually be open source. If you want to read more about the virtues of APFS, here are a few of my favorite links on the subject. Apple wasn’t done with file storage nerdery however, and announced it will natively be supporting the future of video compression, H.265 or HEVC, in both hardware and software.
Moving along, Apple’s in-house graphics engine, Metal, is receiving its first major upgrade, inventively named Metal 2. Come on Apple, Heavy Metal was right there! Metal has already been incredibly successful among the developers who have chosen to implement it, improving speeds 10x over the old OpenGL standard Macs previously relied upon. Metal 2 is an additional 10x faster, bringing us graphics performance 100x faster than a couple years ago; not a bad performance curve. As Cillian and I discussed, now we want to see the real benchmark: how does Metal compare with DirectX 12, which powers Windows and is the default for gaming? Obviously such a comparison cannot be apples to apples, but it’s what we all want to see. Since the system window server now runs on Metal, even older Macs should have significantly more responsive user interfaces under load. How is Apple achieving these gains? By slowly moving the GPU “up the stack”, reducing the number of operations which require the CPU and avoiding situations where the GPU is patiently waiting for the CPU before getting down to work.
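To make “up the stack” slightly more concrete, here is a skeletal Metal compute dispatch. The kernel name, doubleValues, is hypothetical and would live in a compiled .metal shader file; the point is the shape of the workflow, where the CPU merely encodes commands and the GPU drains the queue without either waiting on the other:
```swift
import Metal

guard let device = MTLCreateSystemDefaultDevice(),
      let queue = device.makeCommandQueue(),
      let library = device.makeDefaultLibrary(),
      let kernel = library.makeFunction(name: "doubleValues") // hypothetical kernel
else {
    fatalError("Metal setup failed (no device, queue, or shader library)")
}

let pipeline = try! device.makeComputePipelineState(function: kernel)

// Hand the GPU some data in memory both processors can see.
let input: [Float] = [1, 2, 3, 4]
guard let buffer = device.makeBuffer(bytes: input,
                                     length: input.count * MemoryLayout<Float>.stride,
                                     options: .storageModeShared),
      let commandBuffer = queue.makeCommandBuffer(),
      let encoder = commandBuffer.makeComputeCommandEncoder()
else {
    fatalError("Couldn't set up the compute pass")
}

// The CPU's entire job: describe the work...
encoder.setComputePipelineState(pipeline)
encoder.setBuffer(buffer, offset: 0, index: 0)
encoder.dispatchThreadgroups(MTLSize(width: 1, height: 1, depth: 1),
                             threadsPerThreadgroup: MTLSize(width: input.count, height: 1, depth: 1))
encoder.endEncoding()

// ...then hand it off. commit() returns immediately, leaving the CPU
// free to encode more work instead of idling while the GPU runs.
commandBuffer.commit()
```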
In super exciting news, Metal 2 will provide native support for eGPU solutions. As the MacBook line moves forward, it continues to push the limit of how little energy it can use while still providing a good user experience. To date, Macs have achieved this by sacrificing video performance, which has proven to be a largely acceptable trade-off. Sometimes, however, you are sitting at home at a desk with your laptop, have access to unlimited power, and would like to play a game or do some other intensive task with the same device. With the crazy speeds of Thunderbolt 3, it’s now theoretically possible to dramatically improve the performance of the computer using an external graphics card.
As Craig points out in his discussion with John, this is no small engineering feat. Computers have been built for the last 50 years under the assumption that the CPU and GPU are completely static entities, and up until now, removing a GPU from a system entirely would result in immediate catastrophic failure. Apple has dipped its toes in these waters a bit, as they have been offering laptops with both a high-powered discrete card and a low-powered integrated graphics solution for almost 10 years now. At first the user was responsible for enabling the discrete card themselves, but doing so required the user to log out, which for most situations is the same as requiring a restart. Shortly after, all of this was handled by the OS and completely hidden from the user, but the system was always in control of when the additional GPU was present and when it was not. With an eGPU, the system needs to be prepared that at any moment, doing any task, the GPU it was actively using may just disappear, and that has to be OK. This isn’t only important for the operating system as a whole, but also for any apps which may be taking advantage of the increase in power. Thus Apple is offering a dev kit which has a Radeon Pro 580 in an enclosure for $599 and includes a $100 credit towards the purchase of an HTC Vive. Metal 2 provides the underlying tools which developers can use to support eGPU configurations and take advantage of all the additional FLOPS. Hopefully, this will be yet another reason to encourage Metal adoption amongst any developers who have yet to make the jump.
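From an app’s point of view, the new world looks something like the sketch below: under High Sierra, MTLCopyAllDevices() lists every GPU in the system, and the new isRemovable flag is what marks an eGPU that could vanish mid-task:
```swift
import Metal

// Enumerate every GPU the system can see, external ones included.
for device in MTLCopyAllDevices() {
    if device.isRemovable {
        print("\(device.name): external GPU, may disappear at any moment")
    } else if device.isLowPower {
        print("\(device.name): integrated GPU")
    } else {
        print("\(device.name): discrete GPU")
    }
}
// Apps that want to react to an eGPU being plugged in or yanked out can
// also register for notifications via MTLCopyAllDevicesWithObserver.
```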
You may have noticed that the dev kit included a $100 credit towards the HTC Vive, which brings us to the first subdivision of sub-point 6 of point 2 of subcategory 4 of category 1 of Apple’s 3rd of 6 main topics (how ridiculous does that tvOS announcement look now?). Yes, Metal 2 is primed with specific optimizations for virtual reality (VR) development. Apple announced they have partnered with Valve and are bringing the SteamVR SDK to the Mac, and partnered with Epic and Unity to ensure their engines are prepared to power any AR/VR development on the Mac platform. It’s hard to overstate the importance of these partners. Steam is far and away the default distribution network for PC gaming. The Unity engine powers about half of all PC games, 60% of VR, and 2/3rds of all mobile games. Meanwhile, Epic’s Unreal engine powers the most popular games available, so in combination these players basically define the industry. This is certainly one of those areas where only time will tell, but Apple has de-prioritized gaming for so long that these announcements come in stark contrast to the years past. Will Macs finally get to experience games at something other than the lowest settings?
Well, it’s not entirely coincidental that the driver experience has been so poor on the Mac; there simply haven’t been powerful enough video cards available in the devices to justify much attention. No amount of High Sierra magic is going to suddenly start pushing these polygons, and eGPUs will always have a much smaller user base than the greater Apple population.
+ iMac: The Startup Charm
As I mentioned several thousand words ago, Apple had been unable to update the iMac for 600 days, and combined with the state of the Mac Pro and the available configurations of the MacBook Pro, that timeline was feeling even more egregious. Apple doesn’t tend to announce hardware at WWDC, and when it does, it’s a sign the device is intended for developers and other high-demand users. So right off the bat, Apple announcing iMac updates on stage, rather than just a silent update or separate announcement, was a good sign.
The iMac is evolving into a giant floating display, and so that’s where the presentation starts. The 4k and 5k displays each got significant upgrades: they are now 43% brighter (500 nits!) and, with their P3 gamut and fancy 10-bit dithering, can display over a billion colors (10 bits per color channel across three channels works out to 2^30, just over a billion, combinations). In practice this means the displays pop across pretty much the entire spectrum of visible light, even in a brightly lit environment, and the color options are approaching the limit of what humans can perceive. The 10-bit dithering also helps avoid any “color banding” that’s always so easy to spot and despise. I had an opportunity to see the display in person, and it was immediately noticeable, even from the desktop, with no older model around to compare. It’d be easy to miss because the old displays do look fantastic, but the detail in the shadows of the wallpaper is unreal. If you are running Sierra, turn the brightness of your display to max and look at the snow in the shadows of the mountain’s peaks. No matter how new your Mac, there is effectively no detail to that snow, especially if you are sitting a natural distance away from the display. If you were to zoom in, you’d discover the snow actually has a ton of detail in those shadows, with pits and valleys of all sorts. On these new iMacs, all this is immediately obvious, even standing several feet away.
This display will be very honest about the quality of anything you put on it, and things you thought had blended nicely may suddenly appear far more severe. Similar to when displays first went Retina and the web suddenly looked blurry, pictures won’t look blurry here, but details likely no one ever noticed are suddenly front and center. All in all, this is great; the people who buy these iMacs are often the ones editing our content, and so in the long run all of our pictures will simply be edited better. Things like movies, and any digitally created image where the details are already optimized to this level of fidelity, are going to flat-out knock your socks off. Fundamentally, this iMac display is likely the best available to consumers on the planet.
Apple is dropping the latest-generation Intel chips, Kaby Lake, across the lineup. They have a couple nice graphics bumps, but effectively they are nothing special. For years Intel was doubling the number of transistors available on their chips every 2 years, a pattern colloquially known as Moore’s law, and the power of our computers was doubling in the same time frame. These transistors are now only dozens of atoms wide, and so fundamentally getting them much smaller is very difficult. Thus processor companies must get creative in how they continue to push improvements forward.
10 years ago, in 2007, Intel adopted what they called the tick-tock model, where each year their chips would get updated as either a tick or a tock: in each tick the die would shrink (smaller transistors), and in each tock they would bring an entirely new transistor layout offering refinements and optimizations at the same size, commonly referred to as a new microarchitecture. Intel’s current chips are 14nm. They had initially promised they would be shipping 10nm chips in 2015, but that got bumped to 2016, then late 2017, and they are currently slated for sometime in 2018. As you can see, Intel is really struggling to crack that 10nm barrier, and there are some very fascinating reasons as to why, although that will have to come in the form of a different article. For now, as it pertains to Apple, Intel’s inability to deliver substantial updates to their processors has been a major issue for everyone, and has certainly been a contributing factor to the inconsistent updates to the Mac line in recent years. Kaby Lake is slightly more power efficient, allowing it to run at slightly higher speeds than previous chips, and so performance is expected to jump about 10% simply due to higher clock rates; certainly nothing particularly noticeable.
Apple has finally decided to modernize their desktop memory, and while all configurations start at 8GB, the 21” iMacs can now be maxed out at 32GB (up from 16) and the larger 27” iMacs can go up to 64GB (up from 32). This is a tremendous amount of RAM for a machine running macOS. Not only is there more of it, but Apple has finally decided to spring for the best RAM these chips support, with 2400 MHz DDR4 in all but the lowest-configuration 21”, which still gets 2133 MHz DDR4. Even the top-end 27” iMac only offered 1867 MHz DDR3 in the previous generation, so this is a substantial improvement. As an added bonus, all of the RAM is user upgradable, even in the 21”.
+ Storage: The Great Bottleneck
For much of the 2000s, storage drives basically came in two speeds: 5400 RPM and 7200 RPM. Hard drives look a lot like tiny automatic vinyl players: the platters look like miniature CDs which revolve 5-7k times per minute, while an arm with a head, just like those record players’ needles, darts across the platter hundreds of times per second; here’s an awesome video of the whole operation slowed down. Computers worked by storing all the user’s data on these slow drives, and then moving the stuff the user was actively working on to their much faster memory.
With the MacBook Air, Apple started bringing the technology they had made so accessible with the iPod and iPhone to the Mac: flash storage. Flash memory doesn’t have any moving parts, and so it takes up far less space, uses far less power, and runs much faster. It also costs much more for the same amount of storage. When Apple first started bringing flash storage to the Mac, these minuscule flash chips were soldered onto little boards which would connect to the motherboard. This forced all those flash chips to be served by a single input, like a tiny metaphorical 2-lane highway. Apple now solders the chips directly to the logic board, which allows each chip to be written to simultaneously, a huge performance boost at the expense of upgradability. Since more storage means more flash chips soldered to the board, the more flash storage you put in a Mac or iPhone, the faster it responds.
Apple has been finding huge performance and usability wins by going all in on flash storage for the past decade. Where hard drives worked at around 100 MB/s, Apple’s latest SSD configurations are crossing 3 GB/s. While Intel is limping along with 10% gains per year, users’ storage has gotten well over 30x faster in the past decade. Graphics cards have continued to draw massive amounts of power which until recently was only useful in games and in some specific academic circles. By putting faster and faster SSDs in their computers, Apple was able to push forward towards an instantaneous user experience while simultaneously requiring less and less power, allowing for longer-lasting, cooler-running, thinner devices, all HUGE usability wins, at the sacrifice of playing games and appeasing those specific researchers. Thus SSDs have quickly become one of Apple’s strongest differentiators for the Mac.
iMacs are still frequently used to house the entire family’s data, and terabytes of flash storage is still expensive, so the iMacs have gone Fusion basically across the board, except for the non-4k 21”. Fusion Drives pair one of Apple’s extraordinarily fast 32 GB SSDs with a larger 1 TB standard drive, or, more commonly, a 128 GB SSD with a 2 or 3 TB standard drive. For the past few years, Apple has been doing a lot of magic to ensure that all of the files you use most frequently are kept on the SSD and only the files you rarely touch remain on the much slower drive, and with APFS all of that should be vastly easier to maintain. Since most people buy standard configurations, the switch to Fusion Drives is a huge deal. For anyone who has only ever used a computer with a standard drive, the first time you make the switch is delightful. It’s hard to describe what 30x faster looks like; here’s an audio file played at 2x speed and 3x speed. Another way to think about it: if your life were sped up 30x, you’d get a month’s worth done each day, and a year’s worth in about 12 days. For users who have more money than time, you can now configure these iMacs with up to 2 TB of pure flash glory.
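If you’re curious where your own drive falls on that curve, a rough sketch like this (write a gigabyte to a temporary file and time it) will get you in the ballpark; the OS’s write caching skews the number, so treat it as approximate:
```swift
import Foundation

let url = URL(fileURLWithPath: NSTemporaryDirectory())
    .appendingPathComponent("throughput-test.bin")
let oneGigabyte = Data(count: 1 << 30)  // a gigabyte of zeroes

let start = Date()
do {
    try oneGigabyte.write(to: url)
    let seconds = Date().timeIntervalSince(start)
    // ~100 MB/s suggests a spinning hard drive; modern Apple SSDs
    // should land in the thousands.
    print(String(format: "%.0f MB/s", 1024.0 / seconds))
    try FileManager.default.removeItem(at: url)
} catch {
    print("Test failed: \(error)")
}
```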
All of these improvements are incredibly welcome, but they are just refinements to an already strong machine. As has been alluded to several times already, the real issue has always been graphics performance, and particularly at a developers conference, people need to see some improvement. So what’s up?
+ Graphics: Show Me Something
Apple has not invested seriously in graphics performance on the Mac basically ever, but certainly not for the past 10 years. While this has been an acceptable-if-painful tradeoff for most, the past two years have seen graphics cards come into a whole new realm of usability. Once mostly for pushing more frames per second of headshots, graphics cards are much better suited than CPUs for all the latest buzzwords like machine learning, natural language processing, AR and VR. For a platform which has always been a favorite amongst designers and many types of engineers, it is no longer possible to get by with a mediocre video card. Apple stepped up to the plate in a big way, improving graphics substantially across every configuration. For the baseline iMac, this is actually all thanks to Kaby Lake, as Intel's latest generation not only has much better integrated graphics solutions but also pairs the more powerful Iris Plus 640 instead of the previous generation's Iris 550. No powerhouse, the Iris Plus 640 can at least run some games, and Apple is claiming an 80% increase in graphics performance over the 2015 configuration. Despite Intel's greatest gains being around graphics performance, Apple has smartly decided to drop discrete graphics cards into all of the 4K and 5K options. At first glance this is exciting, and it gets better: even the lowest 4K configuration gets a desktop-class Radeon Pro 555 with 2 GB of GDDR5 memory on board. As for the 27" iMacs, they start with the Radeon Pro 570 4 GB and max out at the Radeon Pro 580 8 GB, which Apple claims will net you 5.5 teraflops (a common measure of the number of calculations a card can compute per second). Alongside all these fancy video cards, every iMac is outfitted with the same 4x USB 3 and 2x Thunderbolt 3, and keeps the SD card slot and Ethernet port. Some people will bristle at Apple choosing AMD instead of Nvidia, whose GTX 1080 line has basically set the standard for modern AR/VR performance. While Nvidia has basically been uncontested since ATI was merged into AMD, the newest Radeon cards are absolutely no slouch.

But really, all these specs are just speeds and feeds; what actually matters is the user experience, and for our purposes at WWDC, the experience for a high-end virtual reality developer. When you're Apple and you're looking for someone to demo VR development, there's apparently only one option: John Knoll. Yes, the man who co-designed Photoshop with his brother before moving into cinema at Industrial Light and Magic, where he has helped create such films as The Abyss, Avatar, Hugo and all the modern Star Wars movies. So, about that Star Wars watch face… Anyway, I have skipped talking about most of the demos so far, but this one is worth a link. That video is nothing to sneeze at: the developer is designing live in a virtual environment, with the iMac powering her HTC Vive, the projection system, and most likely the internal display (although I guess it's feasible they did some custom configuration behind the curtains, the machine can power two external 60 Hz 4K displays, so it seems unlikely). As the video shows, the performance is flawless, displaying the 90 frames per second VR requires.
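For developers wondering what they have to work with, Metal is the way to size up these GPUs in code. A minimal sketch using the macOS-only device enumeration API (the printout is just illustrative):

```swift
import Metal

// List every GPU in the machine and the rough budget each offers.
for device in MTLCopyAllDevices() {
    print(device.name)                          // e.g. the Radeon Pro model
    print(device.isLowPower)                    // integrated vs. discrete
    print(device.recommendedMaxWorkingSetSize)  // memory budget hint, in bytes
}
```

An app like a VR renderer would pick the discrete card and keep its working set under that reported budget.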
+ MacBook: The Air Apparent
Apple spent about as long discussing the updates to their MacBook lineup as they did Amazon Prime streaming. Despite the lack of stage time, there are some nice little updates tucked away here. The MacBook is becoming a true Air replacement in everything but the price point. As mentioned with the iMacs, Kaby Lake's biggest improvements are around graphics performance, and the MacBook gets Intel HD Graphics 615. Reviews of the machines are just coming in, but some games should be playable. MacBooks can now be configured with up to 16 GB of RAM, although still of the power-saving 1866 MHz DDR3 variety. The bigger disappointment is that Intel is not yet supporting Thunderbolt 3 with these chips. SSDs get faster as always, but the surprise is that you can now configure the MacBook with a dual-core i5 or i7. Apple hasn't provided the various flavors as test units for us to get any firm numbers, and as this thread shows, guessing how these chips compare is complicated, but it's a good sign for the future of the platform regardless. The MacBook Pro was just redesigned in December, so this quick refresh is a potential sign that Apple may be aggressive with updates moving forward. The body is obviously the same, although the keyboard got a hair more travel and the return of some old friends. The updates to performance are mostly incremental, although some things, like native hardware acceleration for H.265, will lead to noticeable real-world impacts. For the 13", the integrated graphics go from the Iris 540 series to the Iris Plus 640, and for the 15" the discrete cards get bumped from the previous generation's 450 series to AMD's Radeon Pro 555 or 560 with 2 or 4 GB of GDDR5 RAM. The benchmarks show a jump from 1.3 to 1.8 teraflops for these cards: certainly nice, but not enough to be powering any form of VR. Finally, to round out the Mac lineup, the MacBook Air. As Craig jokingly framed it with John Gruber, they decided to drop a few more MHz in the box. Yes, the 13" Air has taken on the role the fat 13" MacBook Pro played circa 2013-2016: it's there, but that's about it.
Yep, with the awesome new iMacs and these pleasant little bumps to the laptops, we've already covered more hardware than should be expected at a developers conference. Also, as John Knoll showed, the "pros" have been well addressed with that maxed-out iMac configuration. Time to bring on topic number 4!
+ iMac Pro
Wait, what is this black magic you speak of? Yes, the iMac has gone full Vader, bringing the space grey aesthetic to the big screen as well as to the Mac's wireless input devices. And it can't be a keyboard for a "pro" unless there's a 10-key variety; developers and spreadsheet wizards everywhere cheer as everyone else goes back to eyeing the finish. The Pro's changes run far more than skin deep, and while the enclosure looks identical to the standard variety, the internals have been completely reworked, providing 80% greater cooling capacity, which allows cranking up the power supply to support a whole slew of components no one would dream could fit in the iMac's slender frame. For starters, the iMac Pro is getting Intel's premier Xeon line, with a baseline of 8 cores and options for 10 and 18(!) cores. As we've seen, Apple has a new appreciation for graphics, and so AMD will be providing their all-new generation of Radeon Vega cards, with Apple promising options for 11 teraflops of single precision, with the ability to do 22 teraflops at half precision. For comparison, the just-announced Xbox One X will run around 6 teraflops. All machines start with 32 GB of ECC RAM, but users can slam an obscene 128 GB in there if they are looking to virtualize a couple hundred machines (joking, maybe). Starting with 1 TB of flash storage, the machine sports the same 4x USB 3 and SD card slot as the standard iMac, but offers 2 more Thunderbolt 3 ports for a total of 4 and, for the first time on a Mac, a 10 Gb Ethernet port. The machine won't ship until December, so exact details are still very sparse. We do know the starting cost is a mere $4,999. This may sound like a pretty penny, but even DIY rigs only save you around $300, and you'd be getting vastly inferior parts in several key areas, such as the display, the storage and the I/O. As for getting these specs from a standard PC supplier like HP or Lenovo (both already more expensive before including a monitor), the cost gets pretty steep. So it seems you're getting a pretty astounding bargain, considering none of those options provide macOS or Apple's world-class, worldwide support. Things like AppleCare are an absurd value when you consider the cost of the components in that beast. Speaking of AppleCare: Apple didn't mention it on stage, I'm guessing since it was an international presentation, but Macs now join the AppleCare+ party. Amazingly, this adds $0 to the cost of AppleCare on the desktops, the 13" MacBook Pro and the MacBook, and only $50 on the 15". With it, a two-and-a-half-year-old liquid-damaged MacBook Pro will get repaired for a mere $300. AppleCare has always been a great value, but this is next level.
+ Recap: Forgotten No Longer
All in all, some truly exciting updates to the Mac platform. The laptops could certainly stand to get more graphics oomph; we'd like to see the current MacBook Pro's integrated graphics performance in the MacBook, and the discrete cards in the MacBook Pro rival those in the iMac, but those issues are clearly less about Apple and more about AMD/Nvidia/Intel providing the performance we want at the wattage we demand. It's easy to think that it was the iMac Pro that John Knoll demoed, but that thing isn't real yet. For now, it's clear that the professionals who have relied upon the Mac for their livelihoods will have viable hardware options provided by Apple. Questions still remain about how frequently Apple will update these devices. Graphics cards update on a 6-12 month cycle, while it wouldn't be surprising if there weren't significant updates to Intel's Xeon line for 2-4 years. If Apple is going to play in the video performance game, they will likely need to be willing to update which graphics cards are offered more frequently than they can update their processors. macOS has seemed to be in a healthy place for the last few years already, but upgrades like APFS and Metal 2 with its eGPU support are likely to quickly result in absolutely awesome new options for users. People will struggle to explain why you'd want to upgrade to High Sierra beyond "why not", as Apple promises universal support for any apps which run in Sierra, but the end results will likely show High Sierra to be as important on an underlying-frameworks level as Snow Leopard and Lion combined. Despite everything that got announced, that entire section took up a mere 30 minutes of stage time. That's right: in 30 minutes Apple announced and demoed their new desktop OS, refreshed their entire iMac and notebook lineups, introduced the world to a whole new type of iMac, and trotted out John Knoll. Somehow, not a single moment of it felt rushed.
+ iOS: Childhood and Development
Tim decided to cash in on the most obvious joke in western society, making a Spinal Tap reference with his introduction of iOS 11. Apple's website is even getting in on the fun at the time I'm writing this. All this nonsense and we still didn't get Heavy Metal. Sigh. An operating system running on the iPhone was sprung on the world over 10 years ago now, and while it didn't do much, arguably no first-generation device has ever nailed the user experience so solidly; it was playing with a hand of nothing but aces. At the time, it was so basic Apple didn't even bother to grace it with a name. It wasn't until it got a sister that it was named iPhone OS, and it then gracefully handled a name change as a toddler to simply iOS to accommodate the iPad. iOS 1 ran 15 native apps and had no SDK. There was no ability to sync data from anywhere except via USB from a computer, short of manually going to a website and downloading something designed specifically to run on this fledgling platform. The one thing you could have the system automatically update for you was email, but even this had to be initiated phone-side and could not be pushed from a server, requiring the user to choose how often the iPhone should spend precious battery milliamps waking itself up to check for new mail, or else messages would only arrive when the user thought to look in the app. Apple added double tapping the home button mid-cycle in that first year, and for the life of me I can't remember the action it was assigned.

iOS 2 brought an App Store along with support for Microsoft Exchange and the new 3G/GPS radios. That was literally the entire feature set at launch, though Apple did release a couple more features, along with a much longer list of bug fixes, 2 weeks later. 10 years later, all that iPhone money sure does wonders for these conferences, and while I appreciate all the new toys we get, I could've gotten through that write-up in like 5k words, I swear. iOS 3 brought copy/paste, Spotlight and push notifications. iOS 4 brought wallpapers and multitasking for music and navigation apps. iOS 5 is where things finally started rounding into form, with Siri, iMessage, iCloud and Notification Center all making their debuts. Most importantly, iOS 5 brought the ability for the iPhone to update wirelessly, separate from iTunes. Combined with iCloud, this was the first time an iOS user could realistically not own a traditional computer. Notice this was the first iOS to support iPad from day 1, and it shipped on the ridiculously popular iPad 2.

After the huge party that was iOS 5, iOS 6 was basically a giant letdown. Maps was the big show, and while I love the app and use it daily, its launch will forever live in infamy. It's also likely one of the most formative moments of the modern Tim Cook era Apple we see today. In response to such a public failure, Tim decided to do some reorganizing. Similar to when Steve Jobs refocused the company's product line with his desktop/laptop/consumer/pro grid, Tim decided to make Jony the head of design, Craig the head of software, Bob Mansfield the head of hardware, and Eddy Cue the head of services. This new setup shed a previous C-level position, which drew a lot of attention, as the man who moved on, Scott Forstall, had been the head of iOS and whose most recent baby was the launch of the beleaguered Maps. I love Maps and use it exclusively these days, but the launch is easily Apple's most public failure since Jobs returned, and for a company as newsworthy as Apple that's no small feat.
It's very easy to get lost in debating how directly the two are related, but I think Apple's newsroom team did as good a job summing up the decision as possible with this extended title: "Apple Announces Changes to Increase Collaboration Across Hardware, Software & Services". The streamlined divisions simply make far more sense, particularly with the benefit of seeing Apple continually expand its efforts into new product lines. Having a separate executive in charge of iOS and macOS doesn't make sense as watchOS, tvOS and the currently esoteric but still evolving "Siri OS" that lives in AirPods and the eventually-to-be-discussed HomePod continue to gain steam. The 5 main components of Apple's device strategy are hardware, software, design, services and the oft-forgotten retail/AppleCare, so each of those concepts now gets a dedicated senior executive. In this light, I guess the decision was between Forstall and Federighi, and it's commendable Tim was willing to go with the guy from the venerable Mac platform rather than the sexy new most profitable platform in history. With all this going on, iOS 7 was already garnering a ton of attention. Rumors were flying that the OS was going to see a redesign, with Jony immediately asserting his vision of how the software that graces his hardware designs should look, feel and sound. Apple was able to keep what this design would look like a complete secret, and so the demo on stage was a ton of fun. In practice, iOS 7 was in some ways modern Apple's version of the classic Mac OS to OS X conversion. If you want to learn about that, John Siracusa has the content for you. As someone who foolishly installed the iOS 7 beta on my device simply because I was too curious, the first three versions or so were analogous to the OS X 10.0 Cheetah, 10.1 Puma and 10.2 Jaguar releases sped up to a matter of weeks. The company had clearly come a long way. While Apple did an astonishing amount to polish iOS in the few months before iOS 7 was released, it was very much a hasty preview of what Apple's new team had in store for us. It has taken years for Apple to address all of the various underlying elements, but in so many ways these last updates through iOS 10.3.x have felt like the final points rounding out the change. Apple has now had a chance to go through every single core app's functionality, from Messages to Photos and Calculator; every underlying framework, from APFS to CoreWhatever; and every service, from iCloud to Siri and the necessary new ones like News, Apple Music and Apple Pay; and consider what each should look like in the modern era. Even the privacy experience seems addressed, from the clearer webpage to the much, much cleaner interface and workflow for 2FA and forgotten passwords. Thus, in pretty much every way, iOS 11 feels like Apple's chance to start the second generation of this team's vision. The company's recent public hires seem to fit this theme, bringing on top-tier outside support to help roll out this next generation of iOS. With all this background, let's start talking about what got announced!
+ iOS 11: All Grown Up
Craig ran back out on stage to breeze through his latest mobile OS. Following the structure I highlighted earlier, Apple started by going straight to the most used app: Messages. After last year's dynamic, feature-packed release, Apple apparently isn't ready to ship a next generation of iMessage, so this version refines and cleans up some of the elements added last year. The biggest news is actually on the services side, as iMessage now syncs with iCloud, meaning all your devices should always show the entirety of your conversations, and any new device you set up can import your entire iMessage history from iCloud. As far as I can tell, there is no indication that SMS messages will receive this same treatment, even if you have forwarding turned on. Messages now parses flight numbers and links to live flight information. The app drawer has been redesigned to be a bit more focused, a likely necessary step to improve awareness and usage of these apps' functionality. Minor iterations. This actually led into Apple's next major improvement to Apple Pay: peer-to-peer payments, facilitated from Messages. Apple Pay is always interesting, and customers now get an Apple Pay card which can accumulate and store money. This is the closest Apple has ever come to acting as a customer's bank, and it'll be interesting to see if this is just a structural anomaly or if Apple has intentions to explore the space. For now, however, you can use this money with Apple Pay merchants or transfer it back to your bank, and with so many banks supporting Apple Pay, it seems likely that process will vary. They didn't mention it on stage, but this spring Apple will be launching a new service called Business Chat alongside iOS 11. The idea here is that users will be able to start chat conversations directly with any business from within Maps or Siri, which will open in Messages. These conversations will be persistent, allowing them to slowly unfold over time. All sorts of information can be shared back and forth, and the channel could even be used as another way to authenticate a user. All of this communication is completely initiated and controlled by the end user. Siri was next up on stage, featuring new male and female voices, presumably across the various languages. On stage we only heard the American English variants, and the demos sounded lovely. I've been using the British female Siri, and even in the beta the voice is noticeably more natural, particularly when speaking extended sentences. Siri gets a new interface, allowing you to converse with her via text, and will display content more cleanly, even providing multiple results; she should also be getting the ability to answer follow-up questions, although what that looks like has not been described in much detail to this point. The big demo is that Siri is now going to start translating between languages. It's interesting that Google Translate works between over 100 languages while Google Assistant is only offered in 8, whereas Apple offers Siri in 36 languages and is only just now exploring translation. While US Siri will start with Chinese, Spanish, French, Italian and German, Apple hopes to expand on those languages quickly. For those trying to learn one of these languages, the ability to ask aloud "Hey Siri, how do you say 'This keynote is too damn dense' in Italian" seems like a complete game changer. SiriKit is adding new intent types as well, such as the ability to work with payment accounts, lists and notes, and visual codes.
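To give a flavor of what adopting one of those new intents involves, here's a minimal sketch of handling the new notes domain. The class name and the omitted persistence step are hypothetical; only the SiriKit types are Apple's:

```swift
import Intents

// Lives in an Intents app extension. Siri parses "make a note that...",
// hands us a structured intent, and we confirm what we did with it.
class CreateNoteHandler: NSObject, INCreateNoteIntentHandling {
    func handle(intent: INCreateNoteIntent,
                completion: @escaping (INCreateNoteIntentResponse) -> Void) {
        // intent.title and intent.content carry what the user dictated;
        // a real app would save the note to its own store here.
        completion(INCreateNoteIntentResponse(code: .success, userActivity: nil))
    }
}
```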
These interactions are starting to look fairly complex, yet the desire from developers is undeniable, and competition in the field is getting pretty fierce. While these additions are nice, they are not going to be enough to change the perception that Siri is behind. In terms of Siri's potential effectiveness, her newfound ability to apply on-device learning and predictions across apps should be a big step in competing with the current evolutions of Cortana and Google Assistant. Siri is now aware of what you are doing in Safari, and if you've been looking up Scandinavian countries, Siri will predict that you will likely be talking and typing about them in the near future in other apps such as Maps, News or Messages. Siri's picture of you will now sync across devices, meaning actions you take on your Mac will inform what she thinks you'll be doing later on your phone, which should dramatically improve her ability to surface the right information. Additionally, in order to sync this information privately, all of it is part of your iCloud personal data, so Siri will no longer be starting from scratch when you set up a new device. This is actually a major little nugget Apple briefly touched upon during the keynote, as it seems they have finally cracked how to ensure the user maintains complete control over their personal information while still having it synced and stored in the cloud. To achieve this, all of the information is stored as an encrypted file on Apple's servers. This file is complete nonsense to any computer unless it's also provided the user's Apple ID password. Apple itself provides no way to use that password unless the user can also authenticate from a previously established device via 2FA. The authenticated devices each store this password and thus are able to parse the file themselves. This has several major impacts. For one, no amount of change in Apple's future ethics would allow them to suddenly crack open your conversations and share them with prying eyes. For things like Siri predictions, each device only knows the information which has been previously verified by the user, and makes its deductions from there on its own. When a user does ask Siri a question, the request is sent to Apple using a one-time anonymous Siri ID. Thus Apple is able to learn from the requests in aggregate, and the user's Siri profile is able to learn from and sync all of the user's behavior, but Apple itself ties none of this information to a user. For now, Apple is storing the encrypted audio clips for 30 days, but since this information is partially randomized, even if it were somehow decrypted and shared, there'd be no straightforward method of working out who asked what, other than listening to it and playing "Guess That Voice". Big picture, this seems like it might be the end of the days when Google or Amazon are able to collect more data about their users than Apple to provide a better experience. With this setup, Apple should be able to collect the same data, and the combination of anonymous IDs and differential privacy to transmit the encrypted data seems to ensure user privacy against even the theoretical possibility of "hacking" the data. With 2FA, now mandatory with iOS 11 and macOS High Sierra, even the weakest link in the chain, the human, is effectively safeguarded, as phishing attacks cannot gain an attacker physical access to an authenticated device.
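Differential privacy sounds like magic, but the core trick is old and simple: add noise per person, learn from the aggregate. A conceptual sketch of "randomized response", the textbook building block (Apple's actual algorithms are more sophisticated than this):

```swift
import Foundation

// Each report is deniable: half the time it's honest, half the time it's
// a coin flip, so no single answer proves anything about its sender.
// Across millions of reports the noise averages out and the true rate
// can be recovered, which is all the collector needs.
func report(truth: Bool) -> Bool {
    let coin = { arc4random_uniform(2) == 0 }
    return coin() ? truth : coin()
}
```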
The only real question mark would be scenarios where a user loses access to all of their devices simultaneously, which is an increasingly difficult scenario to find yourself in, and one which will likely become practically impossible as wearables evolve. Apple shared that its users are now taking a trillion photos a year; for perspective, film cameras peaked at around 80 billion in 1999. Introduced last year as a way to enjoy all of these photos, the Memories feature is being improved with new topics and the ability to auto-adjust playback for portrait or landscape. With everyone taking all of these pictures, many of which are actually small videos, and with options for things like pano, slow motion and 4K, our planet is slowly becoming storage for our photos. Apple is supporting H.265 across iOS as well, and, taking a page from Richard Hendricks, is even extrapolating on the format to make a more efficient version of still photo capture it's calling HEIF (a sketch of opting in follows at the end of this section). When users share these pictures, they will be automatically converted to the standard .jpegs we've grown to expect. H.265, or HEVC, is the successor to H.264 and reduces file sizes by about half compared to its predecessor. The improved compression comes at the cost of more CPU cycles to decode it. Apple has decided to forgo using GPUs to decode the video, presumably mostly to preserve power. In theory this could cause older dual-core laptops to struggle with these videos, particularly when multitasking. Only the last two years' worth of MacBooks, MacBook Pros and iMacs will provide hardware-accelerated support, and only the Kaby Lake machines announced on stage will do so for HEVC 10-bit color videos. HEIF is essentially a short video that can be displayed as a photo, making it a terrific container for Live Photos, and in theory it could be better than .GIF for those purposes. In practice this will allow things like bursts and Live Photos to take up much less space at a higher quality and be far more manageable, while converting the file to .jpeg on sharing should minimally impact quality. The hot new feature in Apple photography is undoubtedly Portrait mode, which sees substantial improvements, including support for low light using the True Tone flash, and can now even apply HDR and depth effects simultaneously. Perhaps even more excitingly, depth is now a public API, and so developers will be able to start using that information to manipulate our photos. Look for apps like Prisma or Snapchat to take their filter games to a whole new level. While some users have balked at Live Photos, I've never understood why you wouldn't leave it on by default, and the new features only make that more obvious. Users will now be able to flip through the various frames taken during the live shot and choose the key photo, which makes it a no-brainer for any action scenario. In addition, you'll be able to apply effects such as bounce and loop, which have become so popular (RIP Boomerang), and perhaps most impressively, you can use it to turn any one of your shots into an artistic long exposure. Since by definition you'll never know in advance when something will happen fast in front of your camera, these modes make the advantages of leaving Live Photos on even more obvious. Apple is scrapping their redesigned Control Center from last year and going back to a single card to display it all, and there was much rejoicing.
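As promised above, here's what opting into the new capture formats might look like for a camera app: a hedged sketch using AVCapturePhotoOutput's iOS 11 additions, falling back to JPEG on hardware without an HEVC encoder.

```swift
import AVFoundation

// Prefer HEIF/HEVC when the device can encode it, JPEG otherwise.
func photoSettings(for output: AVCapturePhotoOutput) -> AVCapturePhotoSettings {
    if output.availablePhotoCodecTypes.contains(.hevc) {
        // HEIF container with HEVC-compressed image data.
        return AVCapturePhotoSettings(format: [AVVideoCodecKey: AVVideoCodecType.hevc])
    }
    return AVCapturePhotoSettings(format: [AVVideoCodecKey: AVVideoCodecType.jpeg])
}
```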
Apple seems to recognize Control Center is a rather advanced feature and is moving forward with an icon-only interface. These icons can be more functional than before, too, such as the volume and brightness icons doubling as sliders. Further controls can be accessed with force touches or long presses, and users now even have the ability to customize which options are available, with a bunch of new choices like a screen-record button, a huge win for the people who bring you all these reviews and how-tos. Notification Center has been merged with the lock screen, so when you pull down from the top of the screen you now simply appear to be returning to the locked environment. People are already complaining, and I'm sure most will be initially put off by the change, but I think this makes way more sense. It was super rare, but some specific notifications would show up on the lock screen and not in Notification Center, and since the lock screen gets bypassed so quickly with Touch ID, when it did happen it was very frustrating. With rumors flying that Apple will be moving to iris scanning, it seems like it could soon become impossible to look at your own phone and have it remain locked, if not later this year then in the near future. With that being the future of iOS devices, it makes sense to move to a model where there's no distinction between the lock screen and Notification Center. All in all, I think this will be one of those moves that seems obvious in hindsight, and the initial mental discomfort will pass quickly. Maps is continuing to move beyond needing to worry about getting you the right directions and can focus on the experience within the app. Airports and malls in major cities will now show detailed internal floor plans, with support for multiple floors. Speed limits are now shown for the road you are currently driving down, and lane guidance now graces the top of the display. With everyone talking the biggest game in town about how much they hate people using their devices while driving, the Do Not Disturb While Driving feature will gain the most attention. The feature seems to be implemented rather elegantly. After the first time the device recognizes you are in a car, either because you've paired with one via Bluetooth or because your phone is suddenly zooming past Wi-Fi networks, the phone will prompt you when you stop traveling, asking whether you'd like the system to enable DNDWD automatically in the future. If you agree, no notifications will come through during future drives until you get out of the car. If you attempt to use your device while in a car, it will ask if you are currently a passenger in order to disable DNDWD. When it is enabled, the user can have the system auto-reply with a specialized DNDWD message, and apparently the person texting you will be provided an option to declare that the information is important and should be delivered anyway. Otherwise, all these alerts will simply filter in once the ride completes. The Home app is gaining the ability to recognize speakers and now supports multiple rooms. AirPlay itself is getting upgraded to AirPlay 2, which will also support these multi-room setups. In what should prove very handy, AirPlay 2 supports a shared Up Next queue for Apple Music, allowing any user on the Wi-Fi network to drop the beat. Otherwise, since AirPlay is really a service hidden from the user, the hope is that AirPlay 2 will bring the reliable experience we've come to expect from Apple devices using AirPlay to third-party accessories.
Apple TVs will be upgraded to support AirPlay 2, and there is a new third-party API as well. Hopefully this can take off. Apple Music now lets you follow your friends and allows you to customize your user page, effectively resurrecting Ping from the grave. This implementation makes far more sense, however, as you will simply see users' profile pictures on albums they listen to as you browse. While publicly shared playlists and multi-user playlists were my most desired features, and still haven't made it to Apple Music, this will likely be far more useful in my day-to-day life. Public playlists give me no indication that their creator shares my tastes, making them labor-intensive for discovery, and multi-user playlists require people to actively set up a shared playlist; this feature will simply let me see which of my friends like whatever content is currently on my screen, without any effort on either of our parts. This can greatly reduce option paralysis and should improve discoverability. Apple also added MusicKit, so developers can start integrating the service directly into their apps, whether that's DJ apps letting the user remix songs or games letting the user pick a custom soundtrack.
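The basic MusicKit flow on iOS runs through StoreKit authorization and the system music player. A minimal sketch (the store ID below is made up):

```swift
import StoreKit
import MediaPlayer

// Ask for Apple Music access, then queue a catalog track by store ID.
func playSomething() {
    SKCloudServiceController.requestAuthorization { status in
        guard status == .authorized else { return }
        let player = MPMusicPlayerController.systemMusicPlayer
        player.setQueue(with: ["1065973699"]) // hypothetical store ID
        player.play()
    }
}
```

A DJ app or game would layer its own UI over this, but playback rides entirely on the user's existing Apple Music subscription.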
+ A New Store Front
As far as app updates go, Apple saved the best for last and announced that the App Store is undergoing its first ever ground-up redesign. While things did get a facelift with iOS 7, the app has kept the same underlying framework, with the same tabs and combination of vertical and side-scrolling lists which have graced the App Store since it launched 9 years ago. With iOS 11, the App Store opens to the Today panel, which beautifully highlights a few apps each day. Each featured app gets a bunch of screen real estate, and if you tap on one, a nice extended story pops up, which Apple has hired humans to write thoughtfully. For games it will share the experience of playing, along with screenshots, videos and press quotes. For productivity apps, it can include tips and tricks for getting the most from the functionality. This will make these write-ups worth reading even for apps you already own. Each day Apple will feature a new app, game and list, and about a week's worth of previous days will be kept around for when you're unable to check for a few days. The App Store now houses over 2.2 million apps, and this change could not be more welcome. This design makes it incredibly appealing to swing by the store a few times a week and see what's new. It should also make it much, much easier for a developer to break into an existing space if they bring some clever new functionality to the table. Whereas top lists make it difficult for new entrants, these curated lists will be able to ensure high-quality apps get the most attention. After the Today page, the app is subdivided into Games, Apps, Updates and Search. Games finally gets broken out into a dedicated section because, let's face it, games are incredibly popular, and you interact with these two types of content entirely separately. The Games and Apps pages each look pretty similar to what the old Featured page looked like, with Apple featuring specific titles and themes, with some top charts down below. The content has been formatted to basically all fit across the device in landscape mode, so pretty much everything can be reached simply by swiping vertically. There is still some horizontal swiping, particularly in portrait, but it appears this interaction method will be rarer, and that's good news for my thumbs. Personally I think Apple might be better served moving toward trending lists rather than top lists, but these have been so de-emphasized by this design that breaking into them may no longer be the be-all-end-all it had become for app profitability. Finally, individual product pages are also seeing a complete redesign, and developers have been given the freedom to be far more involved in designing how their app is advertised. Professional and user reviews can be highlighted; gameplay videos and screenshots can be more elegantly wrapped into the experience; and in-app purchases can be highlighted and completed directly from the product page. Developers can now choose whether or not to reset their app's user rating when they release an update, and can set timed releases, or even phased releases, to test how major features roll out. Apple is promising to review apps even faster, with a stated internal goal of reviewing apps within 2 hours of submission. Auto-renewal is available and supports Apple Pay. In return for all this, developers must now use the official API to request app ratings.
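That official ratings API, for the curious, is about as small as APIs get; the system decides if and when the prompt actually appears, and only shows it a few times a year:

```swift
import StoreKit

// Call at a natural, happy moment in the app; iOS may or may not
// actually present the rating prompt.
SKStoreReviewController.requestReview()
```

The win for users is that apps are no longer supposed to roll their own "rate me!" interruptions whenever they please.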
Well, that's a whole lot of new stuff for one OS. And there are so many other little things they glossed over, like a new GIF folder in Photos, a revamped storage management screen, a one-handed keyboard, an option to offload unused apps, and even bolder text. The pain points are slowly being relieved too, like Bluetooth no longer disconnecting when Airplane mode is enabled. Apple must be pretty proud of themselves, and busted out a nice demo to pat themselves on the back for their hard work. Then, in what came as a bit of a surprise, Craig came back out on stage for some more announcements.
+ iOS 11: Digging Into Kit
Metal 2 is making its way to iOS and brings similar advantages as on the Mac, minus things like eGPU support which wouldn't make sense there. In some ways, iOS's flavor of Metal 1 was a bit more feature-rich, so with Metal 2 macOS is largely playing catch-up. I've found a list of some of the highly technical additions, such as arrays of samplers, dual-source blending, indirect argument buffers, programmable sample positions and uniform types. The bigger thing here is that iOS is such a gigantic platform that it will drastically increase the number of developers gaining experience with Metal. By bringing iOS and macOS to greater parity in Metal 2, Apple is helping ensure Metal gets adopted for the Mac despite how entrenched DirectX is in the market. Next up is a new framework which brings the latest buzzword, machine learning, natively to the platform: Core ML. Apple spent very little time talking about this on stage, but the concept is incredibly fascinating. Machine learning basically works by creating trained models, which Apple's developer website defines as:
A trained model is the result of applying a machine learning algorithm to a set of training data. The model makes predictions based on new input data. For example, a model that's been trained on a region's historical house prices may be able to predict a house's price when given the number of bedrooms and bathrooms.
Apple's implementation seems to be that it will support the most popular machine learning model formats currently available and will automatically make them work with your app on Apple's devices. It's a little hard to get an idea of what this looks like, but developer Otto Schnorr of DeepDojo compares the solution Apple is offering to the process of creating a .PDF today. You write your content, you pick a machine learning model which fits your content type, and you drag the model onto your project. Core ML automatically generates the code required to use the model on all of Apple's various devices, taking full advantage of the latest hardware and software optimizations available (a rough sketch of the developer's side follows below). I am absolutely not the person to test how well this works in practice, or to whip together some real-world example of it in use. As far as I can tell, however, this commoditizes the advantages of machine learning in a way no one else has so far attempted. To date, if a company wants to apply the advantages of machine learning to their product, they have to hire the talent to do so in house. Google is offering the ability to rent hardware specialized to run that code, but you still need to understand how to connect your code to a neural network. IBM is letting developers take advantage of Watson to quickly do specific searches or build a chat bot, but Apple's implementation seems much closer to what the commercials suggest about Watson than how Watson functions in reality. If it's true that a developer can create a note-taking app that supports the Apple Pencil, then drop a handwriting recognition model on top of it, using machine learning to improve handwriting recognition and conversion to text over time, without needing to understand anything other than how to make a notes app that supports a stylus, well, the possibilities are endless. Next up in the machine learning section is the new ARKit API. As I mentioned far too long ago, Apple's competitors have been showing off all sorts of cool keynote demos highlighting the future of AR. Craig pulled out one hell of a subtweet, managing to fire shots that were clearly directed at someone, though it's anyone's guess whether that was Facebook, Snapchat, Magic Leap, Microsoft or any of the other mixed reality companies. In comparison to all of the highly curated concept videos everyone has been showing us all year, Apple used the same device many of us had in our pockets and started showing similar demos, live, and suddenly a gaping hole appeared in the earth where Apple's mic dropped. The impressions in my Twitter feed were pretty hilarious. My feed has a lot of people who have spent a lot of time discussing how valuable those demos from other companies have been and how transformative products like the Magic Leap were going to be. Every single one of them immediately agreed that a new standard had been set, and in a matter of minutes Apple had become the best game in town. Whereas most developer tools Apple releases at WWDC require months before we start seeing cool applications outside of Apple's prepared demos, a quick YouTube search for ARKit shows this situation is drastically different. I have been finding all sorts of videos of developers immediately putting these tools to use after installing the iOS 11 beta on their devices.
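Before digging into those ARKit demos, here's the promised sketch of the Core ML workflow. Everything named here is hypothetical: dragging a (made-up) HousePricer.mlmodel into Xcode would generate a typed class along these lines, following the house-price example from Apple's definition above.

```swift
import CoreML

// "HousePricer" and its inputs/outputs are stand-ins for whatever the
// dragged-in model actually declares; Xcode generates the real names.
let model = HousePricer()
if let output = try? model.prediction(bedrooms: 3, bathrooms: 2) {
    print(output.price) // the model's estimate for this configuration
}
```

The point is what's absent: no graph construction, no GPU code, no format wrangling. The generated class is the entire developer-facing surface.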
Yes, developers at the conference, amid what are likely the most social five days in a developer's year, were able to simply load up ARKit and get going, showing off cool implementations. Effectively, Apple's API provides all the necessary tools for anyone to immediately start designing mixed reality apps. Combining the camera feed with Core Motion data, ARKit's Visual Inertial Odometry (VIO) does all sorts of visual wizardry without any calibration by the developer. Surfaces are automatically detected, and objects dropped into the scene simply react to those surfaces naturally. If you put a coffee cup in the world, it will simply rest on the first flat surface it finds, like a table. The objects you drop into the scene will interact with one another, so if you add a lamp, the lighting effects will process through the steam rising off a virtual cup of coffee. All of this renders live, and as users move about they can see the visuals from every angle imaginable. ARKit provides motion tracking for the objects in a scene, estimating planes, scale and ambient light, so your virtual objects don't look out of place and immediately read as natural objects in the real world. All of this is built with support for the Unity and Unreal engines as well as Apple's own options like SceneKit and SpriteKit. Apple is even creating templates for specific types of apps, which helped spur all these demos we've seen already. With the keynote demo blowing minds, Apple announced that all A9 devices and up are supported. In a world where companies are just starting to announce their first AR-related products, Apple suddenly has an available market of hundreds of millions of devices. While getting specific numbers on A9-and-up devices sold is difficult, it's well over half a billion and will likely be approaching a billion by the end of the year. Even last year's SE model is suddenly an AR-capable device. In a matter of minutes, Apple had destroyed whole industries and created a ton of new ones. Yes, iOS 11 by all accounts is quite a monstrous release. While there may be fewer immediately obvious changes than some years, there are still plenty of them, with things like the lock screen/Notification Center merger. Apple showcased so many things that features which once would have been huge talking points now just get stated as facts, such as a QR code reader in the default camera app. They didn't even mention tons of features: some at least got shown for a brief moment in the background of a slide, like dynamic quote support and simple Wi-Fi password sharing; others will simply be left for us to slowly discover. All that said, the underlying technologies like APFS, Core ML and ARKit seem to be the real showstoppers. While the advantages of APFS and Core ML will benefit users without it being obvious that iOS 11 enables them, ARKit ensures the public will get plenty of immediate eye candy showing off the power of the newest OS. Well, after another quality 45 minutes, it looks like it's finally time to move on to topic number 5.
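For a sense of why those YouTube demos appeared so fast, here's roughly the entire setup cost of an AR scene; this mirrors the flow of Apple's Xcode template rather than any specific demo:

```swift
import UIKit
import ARKit

// Stand up an AR view, start world tracking with plane detection,
// and drop in a virtual object that ARKit will keep anchored in space.
let sceneView = ARSCNView(frame: UIScreen.main.bounds)
let configuration = ARWorldTrackingConfiguration()
configuration.planeDetection = .horizontal   // find tables, floors, etc.
sceneView.session.run(configuration)

let cup = SCNNode(geometry: SCNCylinder(radius: 0.04, height: 0.1))
cup.position = SCNVector3(0, 0, -0.5)        // half a meter in front of you
sceneView.scene.rootNode.addChildNode(cup)
```

Everything hard, the VIO, the plane estimation, the lighting estimation, happens behind that session.run call.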
+ iPad: Making the World’s Most Popular Computer
When the iPad was introduced in 2010, it was an odd dichotomy: exactly what you'd expect from stretching out an iPhone, while still being sort of mind-blowing in person as a giant handheld display. Compared to the laptops of the time it felt amazingly thin, and having that much display with no fans was a whole new experience. iPad ran for almost a full year on iOS 3.2.x, which if you'll recall from the iOS section was pretty feature-light. Imagine an iPad with no notifications, customizable wallpapers, iMessage, or iCloud. Practically speaking, owning an iPad without a computer was a bad idea, as you'd be unable to back up the device, and getting content onto it could be an ordeal considering the state of the web back then. The iPad 2 was announced a year later and shaved off a bunch of aluminum, making it seem, at the time, akin to a sheet of paper. While it looks bulky today, it is far and away the most popular computer of all time (defining smartphones as a separate category), and it set the stage for the platform to have the most ridiculous standard for YoY unit sales imaginable. When the iPhone 4 came out with its Retina screen shortly after the iPad's debut, it was immediately obvious to everyone what killer new feature they couldn't wait to see in the device. Bringing Retina to the iPad proved to be a bit of an ordeal, and so when the iPad 3 shipped the next year it was forced to commit the deadliest of sins for an Apple product, gaining some thickness and a noticeable amount of weight. Apple, in some combination of frustration over compromising the form factor and a desire to make Lightning ubiquitous, decided to update the line again in the fall of the same calendar year: alongside the release of the iPad mini, the iPad 4 returned to the size of an iPad 2 but with the Retina display. Apple moved forward with this fall release schedule for iPads for the next several years. The iPad mini 2 was a true engineering marvel, and along with the iPad Air it in many ways felt like the size and weight the device was always meant to be, and it started to call into question whether it would even be desirable for it to get much thinner unless it was also somehow flexible, i.e. a long way off. We won't know for sure until Apple actually makes a thinner iPad, but 4 years later all the 9.7" iPads and minis are almost exactly the same dimensions. Apple divided the lineup further in 2015 with the release of the iPad Pro, which came in two sizes: first a new, giant 12.9", and later the familiar 9.7". Functionally these iPads felt like a whole new category, particularly the larger version, which sports more pixels than a 15" Retina MacBook Pro and can run two full-fledged iPad apps side by side in landscape mode. Capacities came up to 256 GB, and everything about the machine felt surprisingly powerful: the color accuracy of the display meets professional standards, and in the smaller True Tone 9.7" version it adaptively adjusts for the ambient light so colors always look the same no matter the surroundings; the larger 12.9" supported USB 3 transfer speeds; and even the speakers sounded more like a cheap wireless Bluetooth speaker than a laptop. iOS had come a long way during this time, and with new features like the aforementioned split screen it was starting to become a laptop replacement for those who previously needed one professionally, instead of just for people who didn't need a computer and were never too comfortable with them to begin with.
The iPads Pro also brought support for physically connected keyboards as well as the Apple Pencil, in many ways the first consumer stylus experience which approached the feeling of picking up a pencil and writing. The iPads Pro also reached a performance point where they compared favorably with laptops: despite being a fraction of the weight of the 15" MacBook Pro, the iPad Pro is fully prepared to display 3 simultaneous streams of 4K video, something a maxed-out 2015 MacBook Pro isn't able to keep up with. Apple's internal silicon teams keep doing such phenomenal work that in so many ways the iPads Pro felt like devices far too powerful for the software they were running, and so Apple decided to forgo upgrading them in the fall and wait until the spring to make any changes to the line. So what do you do to upgrade a device which shouldn't get any thinner; is too powerful for the software it runs; already displays colors about as well as humans can perceive, adjusting to its surroundings; and supports the world's finest stylus experience? Well, the first step was to drop the cost of the non-Pro iPad lineup. Earlier this spring Apple announced that the 9.7" iPad would drop the Air moniker, shaving $70 off the starting price and bringing it to $329. Carrying a slightly underclocked version of the same processor as the iPad Pro, this seems like a terrific price point to encourage people who either haven't gotten an iPad yet or, more likely, keep hanging onto whatever iPad they first bought that hasn't bit the dust. With the stage set, WWDC is a pro event, and what better place to announce updates to the Pro line?
+ iPad Pro: Making the Best Better
The first thing new about the smaller iPad Pro is its name, as the screen has been bumped up ever so slightly to 10.5". Display technology has gotten to the point where bezels are starting to disappear, and Apple has decided to cram just a little more iPad into the familiar form factor. With its smaller size, it's relatively easy to hold in portrait without a firm grip on the side, and the bezels are still there in landscape mode when you need them. To my mind, this is why the larger iPad Pro didn't get a similar treatment: the bezels are necessary for manipulating a device that large. But why go through all the trouble of making it slightly bigger? As Apple pointed out on stage, the new size makes it just big enough to fit a standard-size keyboard, both in software and connected externally. This small difference has a big impact. I have typed A LOT on the 9.7" software keyboard (and coming from the guy who wrote this, you should trust I'm not overstating things), and while I can type almost as fast (though with more errors) on it as on my MacBook, I've always had to use a bizarre 8-finger setup: mostly my thumbs, index fingers and middle fingers, rarely my ring fingers, leaving my pinkies to just hang out in space. This new size alleviates the cramped hands. True Tone displays are now standard across the lineup, and the wide color gamut that debuted on the 5K iMac is coming as well. The displays achieve a new level of ultra-low reflectivity and push out an incredible 600 nits, so they should be quite visible even in direct sunlight. All of these features in combination actually make this the first Apple display to officially support HDR video. The big news, however, is that the new display can refresh at up to 120 Hz. Apple is dubbing the new technology ProMotion, and it is an experience you have to see to believe. Computers have settled into displaying content at 60 frames per second for years now. Some TVs will display at 120 Hz, movies often play at 24 frames per second, and games tend to run at 60 unless power constraints force them down to 30. For a device like the iPad, pushing the display up to 120 Hz would have dramatic impacts on battery performance, so Apple has developed an incredible solution where content updates at the rate the experience requires. When you're scrolling through webpages, the display updates 120 times a second to provide an almost surreal experience. If you're sitting still and reading, the display drops down to 24 Hz, dramatically improving power savings over the old 60 Hz standard. When watching a video full screen, it goes to 48 Hz to best accommodate most video content. Finally, if you are multitasking, watching a video and taking notes, the display will update at 60 Hz until the moment your Pencil makes contact with the display, when it temporarily bumps up to 120 Hz until the Pencil breaks contact again. The result is an absolutely seamless experience, and everyone who touches one is basically losing their mind over how fluid everything feels. Similar to switching a video game from 30 FPS to 60 FPS, it is immediately obvious when you first see it, and it quickly becomes your default, to the point that going back to other devices feels noticeably choppy. The human brain really is remarkable. All of this performance allows the Apple Pencil to reclaim the title for least latency, dropping down to 20 ms just after Microsoft had announced 21 ms.
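ProMotion's rate switching is managed by the system, but apps have had a public lever for frame pacing since iOS 10, and it's how a developer would reason about these tiers. A hedged sketch (the Animator class is my own construction, not an Apple API):

```swift
import UIKit

// Drive animation from a display link, capped at the rate we actually
// need; on a ProMotion panel the system is free to serve up to 120 Hz.
class Animator: NSObject {
    private var link: CADisplayLink?

    func start() {
        link = CADisplayLink(target: self, selector: #selector(step))
        link?.preferredFramesPerSecond = 60  // e.g. a game that targets 60
        link?.add(to: .main, forMode: .common)
    }

    @objc private func step(_ link: CADisplayLink) {
        // Advance the animation toward link.targetTimestamp here.
    }
}
```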
Using their newly public machine learning chops, Apple is now predicting where the Pencil will most likely go next and pre-buffering the line in that direction, so it can display immediately if the stroke follows the expected path. For those who have touched it, the experience is that much closer to reality. At this point, the only real hurdle is friction: it still feels more like writing on glass than on a physical surface. How that issue ever gets addressed is anyone's guess at this point. Apple's silicon team isn't interested in taking breaks, and so Apple is releasing the A10X Fusion to power these dynamos. A 6-core processor, it adds one additional performance core and one efficiency core over the standard A10, while the GPU is an absolute monster, now up to 12 cores. In practice this results in a 30% faster CPU and a 40% faster GPU than the previous iPad Pro. In the span of 7 years, the iPad is now 500x more powerful than the original, and I am unaware of any product line in history approaching half that sort of performance delta in a similar time frame. Even the iPhone 7 is only 240x more powerful than the original iPhone that came out 10 years prior. As people are starting to point out, this puts Apple in a fairly awkward place on stage: Apple's iPad is absolutely trashing Intel's chips in performance. Apple can't really talk about the iPad's performance without making the MacBook look underpowered and calling out Intel, a partner they are clearly heavily invested in, so Apple is effectively ignoring their most remarkable achievement on stage. While I'd love to see Apple's chips powering macOS, they have been reticent to make such a bold move. Intel recently made some strong comments hinting that they will protect their IP around the x86 standard if they feel it is encroached upon, and while the best guesses are this was meant for Microsoft and their new ARM Surface devices, it could be a possible explanation as to why Apple hasn't pushed forward with their own chips for macOS and may not be planning to explore the market. Apple may not want to make a big deal out of the iPad's ridiculous performance, but Affinity Photo had no qualms about it during their demo. After briefly mentioning that their new iPad app outperforms their Mac app running on Intel's latest quad-core i7s, the presenter went on to show the iPad busting through blend modes as if he were merely adjusting the saturation. In the span of about 2 minutes he built a movie poster for a nonexistent Pirates of the Caribbean movie, complete with adjusting the lighting source and angle simply by setting the Apple Pencil down and tilting its angle of attack. Yes, the iPad was able to flip through the app's entire list of blend modes in real time, never showing any buffering or loading sequences. Again, all of this was displayed live, on his iPad, at 120 Hz, while also being projected for the audience. While Apple may not be keen to talk about the performance, they certainly aren't letting it slow them down. Rounding out the update, the iPad is finally getting the same quality cameras as the iPhone. I know people like to make fun of the iPad as a camera concept, but it's a good one, with the world's best viewfinder. While I'd rather take a photo with my iPhone, if I were shooting an extended video, the iPad would be my preferred device.
The iPhone has started showing up in more and more professional photo shoots, and I wouldn't be surprised if the iPad eventually becomes popular for video recording, especially as these cameras continue to improve. Imagine recording a movie where you could watch a preliminary version of the CGI effects rendered live over the green screen behind the actors as you filmed. While there are certainly some workflow issues in making this feasible, the iPad is clearly powerful enough today to provide the experience. USB 3 speeds are now supported by both form factors, and capacities are now offered at 64, 256 and 512 GB. Apple also has a new line of cases to support the devices and is bringing back leather options. Finally, they're making a new sleeve with a Pencil slot, designed to fit the iPad and a keyboard case as well. These iPads are now for sale, and reviews have been coming out from everyone who was lucky enough to get a test unit. If you have been enjoying this review, be sure to check out Federico Viticci's iPad Pro review; his passion for the product is palpable. While people have been lukewarm on advising iPad upgrades for some time, the journalists seem particularly impressed by the experience ProMotion provides. While I already have the original 12.9" iPad Pro, I am strongly considering selling it to get myself one. Yet the iPad I have is already so overpowered; what on earth is anyone going to do with all those CPU cycles on the new devices?
+ iOS 11 for iPad: A Masterful Pro’s Delight!
iOS 11 got 45 minutes of stage time, more than macOS and watchOS combined, and yet there’s far more to its story than had been shared so far. First up, the dock. As anyone who has seen the 12.9” iPad knows, the restriction of 6 apps on the dock looked pretty ridiculous on the giant screen. iOS 11 for iPad now gets a macOS-style dock, and even includes a little predictive area on the right side, functionally similar to Continuity but with more predictions, and visually similar to how apps and files/folders are separated on the dock of a Mac. This is a huge productivity upgrade, as it allows the user to quickly switch between their favorite apps, currently up to 16 total (13 user selected and 3 predictive). The predictive options ensure that the most recent things you’ve been using show up even if they are not part of your normal workflow. This also provides a new way to initiate a multitasking session. Now when you are in an app, you can swipe up lightly from the bottom (a new gesture which brings up the dock), then tap and hold on one of the apps in your dock and drag it up, letting you select which multitasking layout you want. You can still swipe over from the right edge, but now it first brings up an app in a floating app window. You can also drag this floating window to either side of the display, or even hide it off-screen. This allows the user to have 3 apps running simultaneously and still have a 4k video playing in picture in picture, if they all support the various view controllers. The interaction methods to enable all these views can be awkward at the moment and might need some work (it’s hard to know since it’s all in beta), but once you get into them, the power is incredible. Running Notes and Safari side by side and being able to pull over Messages feels fantastic. Apple is also supporting side by side multitasking in almost all of their own apps, though a few holdouts remain such as Settings, and hopefully it soon becomes expected of basically everything except for some types of games. I mentioned that the gesture for bringing up the dock involved pulling up lightly from the bottom of the screen, and this is because if you extend that same drag motion up, you enable iOS 11’s new app switcher view, which visually looks a whole lot like Mission Control on the Mac, except with Control Center available as well. Any pairings you’ve made between apps, such as the aforementioned Safari and Notes, are maintained when in this view. This could theoretically prove a huge boost to productivity, as you could have multiple apps paired and quickly switch between them. All of this is small potatoes compared to the next demo: multi-finger drag and drop. People have been begging for drag and drop on iOS for years, but iOS 10 brought this feature request front and center with side by side app multitasking. Most people seemed to be hoping Apple would allow them to drag something from one window to the other, and it seemed like a reasonable implementation of cross-app file sharing. Apple decided to go much deeper, and the results are pretty exciting. Drag and drop is initiated by long pressing on almost any content; once in drag mode you can tap on as many other items of the same content type as you’d like to add them to the drag. The OS remains virtually fully usable, however, and so you can also swipe around and access content from multiple apps or various app views.
You can always long press to select another type of content, and can hold multiple stacks of different types of content with your various fingers. So you could start by selecting a few web links, then select a few pictures, and finally grab a mapping destination, head to Mail and drop them all in an email. You could even grab some text from a webpage with one finger, transfer it to your other hand, grab a location from the webpage, switch to Maps and drop the location in Maps, pull up multitasking, open Reminders, and drop the text from the webpage in Reminders, all in one drag. Quite simply, this is vastly more powerful than what drag and drop looks like on the Mac (for a taste of the developer side, see the sketch after this paragraph). Not only does this improve efficiency, it’s also far more engaging and enjoyable. Using two hands at once with multitouch is one of the most empowering experiences I’ve ever had on a computer; it simply feels good, similar to playing an instrument if you’ve learned one. This type of experience also highlights a potential future of macOS as the TouchBar slowly expands its role. The next big addition is a new app, Files, which is Apple’s long awaited Finder implementation for iOS. The app does not come on the system by default, maintaining Apple’s desire to prevent users from being forced into thinking about a file system. If the user wants, however, the functionality is now there, and it looks like a rather elegant solution to how users save files in the modern era. The app allows you to access any content saved on iCloud Drive, the local iPad storage, and recently deleted files, as well as third party cloud storage options such as Dropbox, Box, Google Drive and OneDrive. It supports nested folders and tags, provides both list and icon views, and can be organized by name, date, size or tag. Files show the iCloud download icon if they are not locally stored on the device, allowing you to quickly cache them if necessary. There is a persistent sidebar similar to the Mac’s, including space for favorite files and folders and a space to view all recent content. If you have a keyboard attached, the Finder commands you’ve mastered will work just as you’d expect. Finally, even the Pencil is receiving some new interaction methods. Tapping on the lock screen with the Pencil can unlock the device straight into Notes, allowing the user to immediately start jotting down thoughts. Notes now recognizes handwriting, and so you can search through all your handwritten notes using Spotlight. Notes has also been blessed with a new lined paper option, with choices for how the lines are spaced. These options immediately open up the Pencil to a whole new class of users, such as writers and engineers, who want to use the iPad as a place to take quick notes or simple sketches but aren’t trained artists who will sit down and make something beautiful. My handwriting has always been rather inconsistent, and years of disuse have atrophied any talents I had accumulated, yet even in the first beta the system is remarkably good at recognizing the most important words from my notes and allowing me to search them. Interestingly, when you make a search, the preview of the note only highlights the words the system is quite confident it recognized, yet it manages to find the notes I’m searching for even when the word isn’t showing up as recognized in the preview. Apple seems to be doing some classic underpromise-overdeliver, and I expect we’ll see this functionality fleshed out rapidly as the machines do their learning.
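Since we’re on the subject, here is roughly what adopting the new system looks like from the developer side. This is only a minimal sketch of the iOS 11 UIKit drag and drop API for a single image view; the class and property names are mine, and a real app would handle more content types:

```swift
import UIKit

class PhotoViewController: UIViewController, UIDragInteractionDelegate, UIDropInteractionDelegate {
    let imageView = UIImageView()

    override func viewDidLoad() {
        super.viewDidLoad()
        imageView.isUserInteractionEnabled = true
        // Opt the view into being both a drag source and a drop target.
        imageView.addInteraction(UIDragInteraction(delegate: self))
        imageView.addInteraction(UIDropInteraction(delegate: self))
    }

    // Called when a drag begins: hand the system the content to lift.
    func dragInteraction(_ interaction: UIDragInteraction,
                         itemsForBeginning session: UIDragSession) -> [UIDragItem] {
        guard let image = imageView.image else { return [] }
        return [UIDragItem(itemProvider: NSItemProvider(object: image))]
    }

    // Only accept sessions that can deliver an image.
    func dropInteraction(_ interaction: UIDropInteraction,
                         canHandle session: UIDropSession) -> Bool {
        return session.canLoadObjects(ofClass: UIImage.self)
    }

    func dropInteraction(_ interaction: UIDropInteraction,
                         sessionDidUpdate session: UIDropSession) -> UIDropProposal {
        return UIDropProposal(operation: .copy)
    }

    // The drop happened: asynchronously load the image and display it.
    func dropInteraction(_ interaction: UIDropInteraction,
                         performDrop session: UIDropSession) {
        session.loadObjects(ofClass: UIImage.self) { items in
            if let image = items.first as? UIImage {
                self.imageView.image = image
            }
        }
    }
}
```

Note that everything travels through NSItemProvider, which is part of how the system can broker content between apps without ever showing the user a permissions dialog.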
Apple wrapped up with a nice quick demo and then this well focused ad. I feel these ads are particularly valuable for an observer of Apple, as they highlight how focused Apple’s narrative is for the product; if they have a good story to tell, then users can be clear on how the new technology fits in their lives. At times the iPad’s story has felt rather convoluted, but it seems like the community at large feels that Apple is on the right track now. Bifurcating the lineup into standard and pro and having a nice $300 gap in price points makes it much easier to pitch the iPad as either a nice consumption device or an incredibly powerful pro level device for specific tasks. Features like the new multitasking modes, multitouch drag and drop and Files go a long way towards increasing the number of people for whom this iPad can be a true laptop replacement. Many of the reviews are focusing on this angle, and while there is value to it, it’s far too narrow a view to address the full picture of the iPad. Ranging from $300-$1,000, these devices can be a primary computer for some people, but many, many more are best served by them as an additional tool in the belt. I have a MacBook and an iPad Pro, and while there’s a lot of overlap in potential, I use them both frequently for entirely different purposes. I could spend $2,500 and get a MacBook Pro which approximated the gains of each individual device, but for the same amount of money if not a little less, I have in my opinion the world’s best device for writing in the MacBook and the world’s best portable entertainment center in the iPad. I have the iPad Pro as it also doubles as an amazing tool for me as a musician, drastically opening up my potential creativity beyond simply using Logic on my Mac. People seem to want to define the iPad’s success around how much it replaces the Mac. Even though Apple continues to sell twice as many of them as Macs, the fact that the iPad 2 threw off the YoY sales results so drastically has resulted in many analysts viewing the iPad as a dying market or a flash in the pan. Apple clearly sees this story much differently, and now with these new price points and feature sets it’s looking like they are finally starting to deliver on the promise of how “excited they’ve been for the future of iPad”.
+ Recap: Future Computer
With iOS 11, it really feels like Apple is starting to nail the experience of hiding abstractions from the moment you start up while still providing advanced functionality. I talked about this briefly on my podcast, but what Apple is achieving with permissions during drag and drop is astounding, and all of it is done implicitly, without a single dialog box or any additional cognitive load for the user. It’ll be interesting to see which features Apple decides can end up on the phone. While announced with the iPad, Files works on iPhone, but for now at least you can’t use drag and drop on the iPhone. Apple is clearly testing this functionality out, and even allows it when the user is reorganizing their home screen. If you take anything away from the first 5 points of the keynote, it seems like it should be that Apple has 6 major product categories so far: TV, Watch, desktops, laptops, iPhones and iPads, and Apple has long term plans for all of them. We’ll just have to wait and see what those plans entail. Overall, iOS is clearly in an incredibly strong position. Recently I was teaching some children how to code when I asked a child to pick something and write instructions for how to draw it. He decided on a computer, and his instructions were to draw a rectangle with a thick border and a circle in the border centered at the bottom. People over 20 may struggle with this concept, but this is how the future views the iPad. With technologies like APFS, Metal 2 and ARKit, and new features like multitouch drag and drop, Files and Mission Control, Apple is the only manufacturer who is actually delivering on the promise of smartphones and tablets as true computers. With performance now surpassing desktops, Apple’s decision to go all in on a mobile OS seems like a better and better bet versus Microsoft and Windows everywhere.
+ Personal Assistants: Aids for Everyone!
In 2015 Amazon introduced the world to Alexa with the Echo, bringing devices with only a voice controlled interface to the mainstream. Featuring an unprecedented array of 7 microphones, Amazon had the first home device which was prepared to hear your questions from anywhere in the room. Initially pretty feature light, Amazon has been steadily adding skills to their little virtual assistant, and Alexa has started to gain significant mindshare, to the point that in some circles the name Alexa is replacing Siri in the lexicon. Recently, Google has shifted its focus from Google Now to Google Assistant and has released a competitor, the Google Home. Consensus largely seems to be that the Google Home is the more effective of the two assistants. Amazon, however, has been very aggressive about releasing various form factors of Alexa at some nice price points, and is even now testing the waters of adding a camera to encourage adoption in more places. With the masses introduced to digital assistants, there has been rampant speculation that Apple plans to release its own conception of a voice driven device. In some ways, Apple already shipped their first version of this device this past winter: the AirPods (though they are still fairly inventory constrained). If you do not own AirPods, you may not see how the devices are related to the Echo or Google Home at all. The AirPods are arguably the best available Bluetooth headphones, and the story behind AirPods is that they are the best user experience possible for listening to music from Bluetooth devices, particularly iCloud connected devices. For the lucky ones who have AirPods, the customer satisfaction rates are through the roof. It is not uncommon to hear those in the Apple blogging community state the AirPods are their favorite new product from Apple in years. I absolutely love mine, and the experience of having no wires is more valuable than I ever could have imagined. One of the surprising things for some AirPod owners is how they become a trojan horse for using Siri. Some users only use AirPods as glorified earbuds, but for others, the ability to have Siri conveniently in your ear at all times is an unforeseen use case for the devices. Whether or not you use Siri for much more than making calls and picking songs with the AirPods, they are Apple’s first take on a device whose primary interface is voice. Regardless, Siri is almost a nonexistent part of the pitch for AirPods; users come for the music experience and everything else is gravy.
+ HomePod: “My, What a Fine Horse!”
Apple is pitching the HomePod in the exact same way. Completely sidestepping any comparisons between Siri and Alexa or Google Home, Apple is marketing the device as a replacement for home audio systems, a space currently dominated by Sonos. Apple’s new device features an array of 6 microphones for listening for user commands, plus an internal mic for equalizing low frequencies, 7 tweeters, and a 4” woofer.
If you have heard any of Apple’s recent devices’ internal speakers, you know they have been packing some impressive sound into these tiny devices in the past couple years. Since the Beats acquisition, Apple has been dramatically improving the sound across their whole lineup, but this is the first time these sound engineers have had an opportunity to design a dedicated speaker, with 7” of space to work with.
Wrapped in a nice mesh enclosure that comes in white and black, this little guy should be relatively easy to squeeze almost anywhere in your house. Each of the 7 beam-forming acoustic horns has its own amplifier and directional control, allowing the device to dynamically adjust the sound to the space it is placed in. The 4” high-excursion woofer faces upward, powered by a custom made amplifier, and there is a low frequency mic built in to automatically equalize the bass and ensure it doesn’t push too hard and start to distort.
When you’re Apple and you need something to control all these microphones and speakers, the easiest option is to just toss one of your previous generation chips in there, and so the A8 graces the system. Apple points out this is the smartest speaker ever, and that’s quite an understatement. While not quite powerful enough to drive AR, the A8 from the iPhone 6 is almost as powerful as any flagship smartphone outside of the iPhone. It’s comparable to a MacBook, way more powerful than the old white plastic MacBooks if you remember those, and effectively its sole purpose is to listen to sound.
With all of this processing power, the speaker can do some incredible automatic equalization. By listening to the sound it is producing, it is able to determine how best to direct the acoustic horns to fill your room. As you introduce more HomePods to the room, they will recognize each other and adjust accordingly. The processor is so powerful that HomePods are able to perform all of this in real time, adjusting the sound as you move the device around and cancelling any potential echo caused by multiple speakers.
Apple is going all out with their audio processing, and as the device detects its location in space, it will equalize each horn to provide the listener with audio cues our brains will process as stereo sound. Apple demoed the concept of the device splitting out the various elements of a track’s production: the vocals and leads, the other primary instruments or “direct energy”, and the various ambient sounds such as backing vocals/instruments, reverb and other spatial and room effects which producers use to spice up our music. Traditionally you place various speakers about your house which isolate these elements, and Apple is using all of this processing power to provide a similar experience in a vastly more convenient form factor.
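To demystify “beam-forming” a bit: the textbook version is delay-and-sum, where each element of an array is delayed by the extra distance sound travels to reach it, and the channels are then summed so the desired direction reinforces while others cancel. Apple’s actual processing is assuredly far more sophisticated; this sketch is just the classic concept, with every name and parameter my own:

```swift
import Foundation

// Classic delay-and-sum beamforming over a linear array.
// `channels` holds one sampled signal per microphone.
func delayAndSum(channels: [[Double]],
                 micSpacing: Double,          // meters between elements
                 steeringAngle: Double,       // radians off broadside
                 sampleRate: Double = 48_000,
                 speedOfSound: Double = 343) -> [Double] {
    let length = channels.first?.count ?? 0
    var output = [Double](repeating: 0, count: length)
    for (index, channel) in channels.enumerated() {
        // Extra path length to element `index`, rounded to whole samples.
        let delaySeconds = Double(index) * micSpacing * sin(steeringAngle) / speedOfSound
        let delaySamples = Int((delaySeconds * sampleRate).rounded())
        for t in 0..<length {
            let source = t - delaySamples
            if source >= 0 && source < length {
                // Average so the beamformer's gain stays at unity.
                output[t] += channel[source] / Double(channels.count)
            }
        }
    }
    return output
}
```

Run in reverse across the tweeter array, the same per-element delays steer sound toward or away from the listener, which is the basic trick behind separating “direct energy” from ambience.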
HomePods can pair via Bluetooth a la the AirPods, and past that everything else is automatic. Taking advantage of AirPlay 2 and its new multi-room support, these devices should just appear as available speakers/rooms from any device, and quickly and easily switch between playing to one room or another or to your AirPods.
For anyone who owns or has set up a Sonos or competing wireless speaker system, this sounds downright magical. For those who have tried to take advantage of any form of room calibration technology, the experience is pure frustration, involving loud tones blasting for minimal apparent effect. The reality is that each song is tuned differently, and the acoustic properties of a room change from moment to moment. None of this stuff is a huge deal, but if you’re going to try to create a custom equalization for a space, ignoring all of these factors is a death-by-a-thousand-cuts scenario. By doing all of this in real time without any change in the user experience, Apple is going to be able to deliver the best possible sound at any given moment. We’ll have to see how this plays out in practice, but the reports are promising.
Apple is always cautious about sharing device specifications, and with HomePods not shipping until December there are obviously no geeky details available like frequency range or impedance, and who knows if they’ll ever publicly state those numbers. That said, Apple allowed a number of people to listen to them after the keynote, and the impressions have been incredibly positive. Apple apparently brought in an Echo and a Sonos Play:3 for comparison, and unsurprisingly the HomePods completely outclassed the Echo, and from all accounts did as well if not significantly better than the Play:3 across various genres.
Speakers and microphones are actually some of the most technically simple concepts imaginable. As a child you probably played around with two cups attached with some string at some point, and you had absolutely built a custom speaker/microphone solution when you did so. Sure, yours had the fine acoustic qualities of a red plastic cup and the total harmonic distortion of some thread, but it worked, and unless you broke a cup or cut the string, it was going to keep working. Speaker companies have spent the past century or so playing with the harmonic properties of various materials and perfecting the angling and design of the horns (red plastic cups). An important way to improve the sound without needing more space is via the driver’s motor, which in its simplest form is a magnet with a coil of wire around it. The heavier the magnet, the harder it can push the woofer cone and move more air, making it easier for you to hear. In many ways you can approximate the depth of sound available from a speaker by the weight of its magnets. The Play:3 is a much larger device, with a volume of ~300 cubic inches compared to ~200 cubic inches for the HomePod, and yet both weigh about the same: 5.6 lbs for the Play:3 and 5.5 for the HomePod (see the quick calculation after this paragraph). There are certainly a lot of other variables, and the Play:3 has three amplifiers: a fairly insignificant one for its tweeter and 2 for its mid-range drivers, plus a passive bass radiator, which is essentially an unpowered cone that reinforces the bass rather than an amp at all. Apple seems to be putting its weight into the motor behind its one woofer, as there’s no reason for the seven tweeters and their amps to weigh much. Personal experience tells me you’re better off with one big magnet, but all of this is conjecture. Apple is pricing these babies at $350, and with the Play:3 running $300, it’s already a great buy based purely on its abilities as a wireless speaker alone. Wireless home speakers are a burgeoning market, but home speaker systems in general are a huge industry. Almost everyone ends up owning a hifi system, and a huge portion of people have purchased more elaborate surround sound setups. The biggest thing impeding adoption of these systems, which generally cost between $100-$2,000 and basically start at $300 for decent audio, is all the “joy” associated with wires. Setup is frustrating, but the whole aesthetic is ungainly as well: unless you specifically build them into the structural design of your house, you have to come up with some elaborate method to either hide the wires or live with them. If you decide to go with wireless options, you’ll likely be greatly sacrificing sound, and adding a whole new frustration of pairing and getting the content you want to stream to them. For a huge number of users, Sonos only becomes a practical wireless solution once you add the cost of an AirPort Express or previous generation Apple TV enabling AirPlay functionality. For all of this to work out of the box is a huge win and makes the available market huge for this device. Oh, and by the way, it’s also a smart speaker. Yes, Apple has basically perfected its trojan horse play, creating a scenario where in no way do they need to justify the personal assistant’s existence. Of course Siri and Apple Music would be integrated into Apple’s speaker. Thus, you have access to your entire iTunes library, and if you subscribe to Apple Music, immediate access to pretty much the entire iTunes Music Store.
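To put rough numbers on that weight-per-volume hunch (the weights and volumes are the figures quoted above; the inference is mine, not Apple’s):

```swift
// Back-of-the-envelope density comparison using the figures quoted above.
let homePod = (weightLb: 5.5, volumeCuIn: 200.0)
let play3 = (weightLb: 5.6, volumeCuIn: 300.0)

let homePodDensity = homePod.weightLb / homePod.volumeCuIn // ≈ 0.0275 lb/in³
let play3Density = play3.weightLb / play3.volumeCuIn       // ≈ 0.0187 lb/in³
print(homePodDensity / play3Density)                       // ≈ 1.47
```

Roughly 47% more weight per cubic inch, and if the magnet heuristic holds, most of that difference is sitting behind the woofer.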
“Hey Siri” is obviously present, and Apple is going the extra mile here, creating an anonymous Siri ID and not connecting your searches with your Apple ID. Siri will be gaining a bunch of new music specific abilities, and should hopefully be able to tell you the names of the various musicians playing and other factoids. You’ll also be able to do standard voice assistant tasks around setting timers and getting answers, as well as interact with any HomeKit enabled devices. The beauty of all of this is that it’s really not that important how well Siri does all these tasks at launch. Sure, I hope she’s ace and is an immediate competitor to Alexa or Google Assistant. While we’re at it, Apple’s PR team has played pretty coy, hinting that this is just a brief demo and that there is much more to this story to tell when the time is right, so maybe these years without a significant update to Siri will come to an end, who knows? In the big picture, it really doesn’t matter; like the AirPods, adoption of these HomePods will be significant from day 1 simply based on their audio capabilities. Since Siri is all software, these updates could come at any point down the road, and until Siri can’t run on an A8, all of those features will work on this speaker. By ensuring that the audio experience is top notch right off the bat, Apple has bought themselves time to flesh out the voice assistant experience. Speakers tend to last, and there is a decent chance this will be the longest supported Apple device of all time; maybe not in terms of how long you could get one replaced through AppleCare, but it will probably be possible to use this guy daily 15 years from now in a way that’s simply not been true of anything Apple has ever made before. Speakers also only increase in value as you get more, particularly if they don’t require much in the way of setup. Thus Apple will be able to sell far more than 1 of these to people over time, but the previous ones will not need to be immediately recycled like an old phone. With all of the real time calibration, and judging by what we’ve seen so far from Apple’s acoustic engineers, when they ship these are almost assuredly going to be the best speakers you’ll be able to buy for under $1k (and arguably much higher up the price chain) for everything except gut-punching bass. Since all of the dynamic acoustic calculations of these speakers occur on board via microphone, there’s no reason to think they won’t adapt very well to other speakers, so for now it’ll be easy for audiophiles to buy a few HomePods and hook up a sub or two to their receiver and fill out the sound when desired. In the long run, it seems easier for Apple to “update” the HomePod by expanding to various form factors rather than somehow pitching new hardware advances. I hope to be able to buy a HomePod with the option for battery power someday, and hopefully they’ll offer some “pro” models which feature the 10”-15” of surface area which deep, powerful bass will always require. The HomePod does include a little touch display on the top, which so far has only been used to show a diffused version of the Siri waveform publicly, but in the hands-on demos it also presented solid + and - buttons. Neil Cybart claims the entire top is actually a display, but I haven’t heard anyone verify if that’s true.
All in all, we clearly haven’t heard Apple’s full pitch for the HomePods, and judging by how they were demoed and the fact that they aren’t shipping for another half a year, Apple hasn’t actually finalized their functionality at this point. That said, there is a whole lot to be excited about already. Having watched Apple’s acoustic performance skyrocket since the Beats acquisition, I was fully confident Apple would be delivering a premium audio speaker. I am impressed by how much they are doing with the A8 in these guys, but all of that is clearly Apple’s home court. As an AirPod owner, the ease of pairing and multi-room support was no surprise either. The best news that came from the announcement was how well Apple was positioning the devices, and how good of a story they had prepared to share. As I mentioned with the iPads, this is always the clearest indicator to me that Apple has a hit on their hands. Pundits will certainly spend the next 6 months making the case that the HomePods are arriving too late to the game and will be too feature light to compete with the already entrenched Alexa and Google Home. First off, there really aren’t that many of these devices in homes yet. Anecdotally, of the few people I know who do own one, many already own both. This highlights that these are mostly early adopters who are always ready to drop some cash on a new experience. For everyone else, however, even if you do already own a Google Home or an Echo, there is still every reason to buy a HomePod. Functionally, no one is going to prefer listening to music on either of the other options. For people who own Sonos speakers, the incentives to make the switch to HomePods are high. Better audio quality and user experience are surely big selling points to people who were purchasing Sonos to begin with, and for something to come out which so heavily trumps each of those experiences while also providing a whole different set of functionality is a purchase plenty of Sonos owners will be happy to justify. For Apple these days, the iPhone basically defines the available market for Apple’s other accessories. Sure there are a few Mac and iPad owners who don’t have the iPhone, and there is some value to an Apple TV or the AirPods if you don’t use an iPhone, but for the most part these products are marketed to iPhone users. HomePods could appeal to a much broader potential set of users, especially if AirPlay 2 becomes popular as a wireless standard. All of these factors, combined with the fact that individuals will be more inclined to purchase multiple HomePods than Apple’s other accessories, lead me to believe that the HomePod has the potential to become a huge success, possibly competitive with the iPad’s launch growth. The fact that there is value to multiple units may help it avoid the iPad’s fate of a fast start followed by tough growth, as the devices prove too good to justify being replaced. Perhaps most interestingly, this gives a lot more clarity as to why Apple has continued to talk about their living room strategy when they get asked about the Apple TV. Thinking about it from that angle, the experience of pairing one of these guys with an Apple TV could easily become the premier living room experience. The biggest pain point of the current Apple TV is the remote; it’s nice that it’s so powerful in most situations, but sometimes it’s too complex for simply watching TV, and other times it’s getting lost in the cushions.
HomePod seems to be Apple’s answer to the experience Microsoft and Sony have tried to pitch with the Kinect and the PlayStation Eye, and again, it looks far better positioned to succeed. Long term, Siri makes a lot more sense if I can talk to the HomePod but see her response on my Watch and other personal devices, but this vision leads to the biggest question mark for the HomePod: how will it handle multiple users? Will Siri even attempt to do things like add appointments to calendars when it could need to somehow authenticate various Apple IDs, maybe somehow integrated with Family Sharing? Or is all this going to have to wait for future iterations? My gut tells me that fleshing out the details of that experience is likely what these next 6 months are about, and I’ll be excited to see what Apple shows us this holiday season.
+ Final Thoughts: Almost There
Apple held up their end of the bargain when they said they had a lot to share. As discussed so long ago at the start of this writeup, Apple went into the week with plenty of issues to address. While every summer brings with it new Apple Is Doomed angst, this year felt a little bit different, as the extended hiatus of hardware updates for so many product lines, combined with the number of new fields their competitors were starting to explore, was causing Apple to feel a bit stale. It wasn’t like the company was going anywhere, but that was essentially the problem. It’s now two weeks after the keynote as I finish writing up this behemoth, and it’s unquestionable that there has been an incredible shift in momentum. Gone are the concerns that Apple was going to let the Mac platform atrophy into a consumer-only platform. Gone is the need for people to spec out hackintoshes so they can continue in their respective fields. Each year Maps apes Google Maps’ most valuable additions. The new app, Files, makes iOS much more functional for many workflows. Metal 2 appears hot on the tail of DirectX, basically the crown jewels of Windows. iMessage adding iCloud sync and Apple Pay adding peer to peer payments do a lot to improve their competitiveness with WhatsApp and Square. A bit more far fetched, but if the new app drawer improves discoverability, then combined with the native QR code reader in the default camera and opening up NFC to third parties, Apple may be laying the foundation to properly address the WeChat experience in China. Apple didn’t spend the week playing catch-up, however. For the most part, they stunned the world by setting new standards. The iMac was already the premier all-in-one, a title firmly solidified now with the updates to the graphics cards across the lineup. The iMac Pro is offering a whole new level of computing performance for an all-in-one that no one dreamed possible, and there will be no competitors in the category for some time. The iPad Pro already had no competitors, and yet Apple is finding ways to make drastic improvements to the user experience while continuing its unprecedented performance curve. ARKit is years ahead of what we’ve seen offered previously, and overnight Apple has effectively killed off an entire industry of companies while simultaneously opening the door for far larger industries to move in. The implications of this are huge, and while it’s easy to get wrapped up in games, IKEA is already reminding us that everyday life is about to fundamentally change. With iOS 11, iPad appears to have rounded a huge corner, starting to capitalize on all of the hard work Apple has been doing with these A-series chips. The future looks bright, with iPad defining its place in a professional’s workflow which is increasingly dissimilar to the needs answered by a laptop. Most people will prefer one or the other, and some people will find they prefer to own both. Bringing concepts like the Dock and Mission Control to iPad are great additions to the platform and will immediately open up new workflows. Yet multitouch drag and drop is where we really see Apple at its best. It took many years, but Apple took the time to completely rethink the concept of content sharing between apps in the modern era. Multitouch drag and drop allows the user to be in complete control over their privacy without even being aware that it’s an issue: no dialog boxes, just implicit permissions granted by where you choose to drop your content.
As time stretches on, this will likely become a landmark feature for iOS devices and one of the strong points of the platform, similar to Spaces or keyboard shortcuts on the Mac. Speaking of privacy and security, this was the first conference where it appeared to be an unadulterated advantage, with no indication that it was constraining the company in other ways. 2FA in the latest updates of macOS Sierra and iOS 10.3 is finally a clean, concise experience. With iOS 11, iCloud Keychain can now work with 3rd party apps, and so your device keychain can now autosuggest and autofill all your passwords (a quick sketch of what this looks like for developers appears at the end of this recap). Hopefully developers will take advantage of this, because if so, Apple has just defeated the password. The first time you use a new service, via app or website, the OS simply creates an inhuman and obscure password. Instantly, all your devices know the new password, and any newly set up devices immediately learn these passwords. Each time you use the app after that, simply authenticate with your Apple ID/fingerprint/other biometric data, and the password autofills. Over time, we can stop even bothering to have these passwords be static things, and take a page from the one-time authorization codes offered with Apple Pay and Siri. Yet, beyond all that, Apple no longer needs to worry about not being able to feed their machines enough data with which to learn about you, and they’ve figured out a way to do it where they themselves are learning nothing about you. Siri can now store and keep just as much data as Google or Amazon server side, but with anonymous Siri IDs and differential privacy, it is able to do so in such a way that violating an individual’s privacy is mathematically infeasible. In a world where governments are considering banning encryption, it is important that the data is inherently separated from the user and not simply hashed. Core ML is another example where Apple didn’t decide to play catch-up, and instead decided to redefine the game. While it’s too early to know if they’ve succeeded, the ambition alone is inspiring. If everyday people are able to start gaining the advantages of machine learning in their apps, well, that’s another complete game changer that was just casually announced during a 140 minute presentation. HomePod seems like it has the opportunity to be Apple’s biggest seller since the iPad, possibly since the iPhone. In fact, while many people have pontificated about what product could ever sell as much as an iPhone, home speakers seem like a category that could approach those numbers. None of this is guaranteed, and I haven’t seen, touched, or most importantly, heard or spoken to one yet. I’m just saying that purely as a product category, there’s a lot of potential for people to own a ton of speakers. As you may have noticed from the pace of Apple’s presentation and the backgrounds of a lot of slides, Apple had a lot more to share on Monday than they had time for. Apple actually held a platforms state of the union with a completely different set of content, and held tons of productive sessions. Things that you might think got lost, like Swift and Playgrounds, are actually alive and well and had multiple iterations announced. In what would previously have been a huge portion of a WWDC, Xcode 9 got some gigantic updates, including a new editor and build system completely rebuilt in Swift. There is a native Markdown editor, cleaner display, native GitHub integration, 300 new diagnostics, and tons of interface cues taken from Swift.
Developers can finally run multiple device simulators at once, and can even debug devices wirelessly, no longer requiring a USB tether to a computer (the MacBook just got a lot more useful to a lot more developers). In a cute trick, the device simulator even allows you to emulate multitouch drag and drop. Apple later announced some significant improvements to the podcast experience, with new tools available for publishing such as seasons, trailers, and new content types such as extras and other extended content. The app interface has been improved with a Listen Now tab, a music-style now playing bar, and new Library and Browse tabs. Podcasters themselves will gain improved analytics, including the ability to see when listeners drop off or skip content. Possibly most impressively, Apple did more than show off their new toys and set new standards. More than most years, Apple is clearly setting the stage for several future products. As I talked about with Cillian, Apple has basically shown us all the components of AR sunglasses (tiny batteries, tiny displays, AR capable processors at low wattages, and much much more) except for glass which can adjust its opacity. Apple has set the stage for a much more powerful Apple TV, and Watch accessories should be getting pretty exciting. All this optimism isn’t to say there isn’t some obvious low hanging fruit for the company to still address. iTunes is a mess, Siri has a lot to prove, and where on earth is Xcode for iPad? For all the love the App Store got, the Mac App Store is no fun. Yeah, and remember that whole 1st topic? Crickets.
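Before moving on, here is the developer-side sketch of the password story promised above. In iOS 11, opting a login form into iCloud Keychain autofill is mostly a matter of tagging your text fields; the class and property names here are mine, and a real app would also declare an associated domain so the system knows which saved credentials belong to it:

```swift
import UIKit

// A minimal login form opted into iOS 11's Password AutoFill.
// With the fields tagged, the QuickType bar offers the saved credential
// and the system authenticates the user (e.g. Touch ID) before filling.
class LoginViewController: UIViewController {
    let usernameField = UITextField()
    let passwordField = UITextField()

    override func viewDidLoad() {
        super.viewDidLoad()
        usernameField.textContentType = .username // new in iOS 11
        passwordField.textContentType = .password // new in iOS 11
        passwordField.isSecureTextEntry = true
        // Assumed, not shown: a "webcredentials:" associated domain
        // entitlement linking this app to the site's saved passwords.
    }
}
```

From there the keychain does the heavy lifting, which is exactly the defeat-the-password flow described above.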
+ Closing Ceremonies: Even the Recap Has Sections
I want to wrap this up by telling two separate stories which I think encapsulate the potential we saw today. The first was actually told for me, and I will simply pass along some of Jean-Louis Gassée’s incredibly thoughtful analysis:
“During the most recent Xmas quarter, Apple sold slightly fewer than 80 million iPhones, about 900,000 a day. Obligingly, a day has 86,400 seconds, so we round up to 90,000 to get a production yield of ten iPhones per second.”

“But producing a phone isn’t instantaneous, it isn’t like the click of the shutter in a high-speed camera. Let’s assume that it takes about 15 minutes (rounded up to 1,000 seconds) to assemble a single iPhone. How many parallel production pipes need to accumulate ten phones a second? 1,000 divided by 1/10 equals…10,000! Ten thousand parallel pipes in order to output ten phones per second.”
Holy hell. That’s amazing, but why bring it up?
How did Apple grow from 5.3M Macs in 2006 to 212M iPhones last year, a 40X multiple? In one of his many Apple 2.0 strokes of genius, Steve Jobs hired an experienced supply chain executive, Tim Cook. To me personally, Tim is the fundamental reason why everything that got discussed today is so exciting. Tim is clearly a mastermind at inventory management and supply chains, and the biggest concern with him as CEO will always be that his products start to run stale without a more visionary person in charge. Today’s conference gives strong indications Tim has positioned the right people below him to ensure that his perfectly oiled machine is always primed with desirable devices.
I believe the Beats acquisition crystallizes Apple’s potential at the moment. Apple’s second most prominent, and far and away most expensive, acquisition was met mostly with surprise. Apple Is Always Doomed, so plenty of people took the opportunity to say it was a dumb way to spend money, but even those who respected the company could struggle to completely explain why Beats was so valuable. While forum commenters will indefinitely use it as an example of Apple’s failures, I think after this WWDC it is finally clear just how smart an acquisition Beats was. First, Beats headphones themselves make a lot of money. Second, Apple Music is an incredibly vibrant service. Third, Apple clearly gained some fantastic audio engineers who have been delivering amazing sound across the lineup, and their talents are now being given a proper platform with HomePod. Finally, and perhaps most controversially, it would appear the personnel have been hard at work not only lining up Apple Music exclusives, but getting Apple in a position to enter whole new markets. Apple seemed incredibly confident and relaxed on stage today. While a huge amount of content was announced, it was rare to see Apple’s presenters press for time or speed up their speech. This was in marked contrast to the partners who joined Apple on stage. Even after the conference, Tim has seemed surprisingly open along with his general affability. Whereas most companies buy themselves into whole new industries, only to frequently be forced to buy themselves out shortly after, Apple buys some talent and incorporates it into their DNA; Tim Cook treats acquisitions the way romanticized Native Americans are said to have treated natural resources, respecting each component and carefully maximizing the potential. It’s more popular to point to the Siri team’s exit as evidence of the opposite, but this seems to be the exception to the rule, and the details are likely misunderstood. As Apple’s silicon, audio and AR teams can attest, Apple is finding a lot of success with its acquisitions. This is why recent public purchases like Beddit and Workflow are so fascinating. Looking at the groundwork that’s been laid with Apple File System, Metal, Core ML and ARKit, and thinking about these amazing teams getting to work on top of these frameworks using the fancy new iMacs and eventual iMac Pros, all while finally housed in an inspiring yet comfortable workspace, I think it’s clear why everyone is so excited.
+ Appendix A
Newest Software Features
- iOS 11
- Apple Pay
- Available in nearly 50% of retailers
- Apple Pay Card
- Can accumulate money
- Can transfer to bank
- Can use at Apple Pay locations
- Peer to Peer Payments
- Siri
- New natural voices
- New interface, accessible via text
- Follow Up Questions
- Multiple Results
- SiriKit Improvements
- Anonymous Siri ID
- On Device Learning and Predictions
- Applied across apps
- Keyboard predictions learn from content
- Control Center
- Notification Center
- Maps
- Mall and Airport floor plans
- Speed Limit
- Lane Guidance
- Do Not Disturb While Driving
- Suggest automatically
- Can disable if passenger
- Auto reply to messages
- Can verify important and force through
- Apple Music
- App Store
- Metal 2
- New features
- Core ML
- ARKit
- Surface detection
- Objects dropped into scene
- Interactive effects with lighting and smoke
- Motion tracking
- Plane estimation
- Ambient light estimation
- Scale estimation
- Support for Unity, Unreal and SceneKit
- App Templates
- Demos, demos, demos, demos, demos, demos, demos, demos, demos, demos, demos, demos, demos, demos, demos, demos, demos
- China Specific
- QR code support
- iPad Only
- Dock
- Fill with up to 13 apps
- Predictive area
- Pull apps into multitasking
- Access by swiping from bottom
- App Switcher
- Preserves pairing
- Mission control view
- Includes control center
- Drag and Drop
- Keyboard
- Flick for alternate keys
- Files
- Nested folders
- Spring loading
- List view
- Finder commands
- 3rd Party Storage
- Tap to unlock straight to notes
- Handwritten note search
- Search improvements
- Most Important Messages Surfaced
- Split view in full screen
- Storage optimizations
- Search improvements
- Interface improvements
- Persistent sidebar
- Improved recognition
- Synced between devices
- Editing Tools
- Selective color
- Edits sync with 3rd party apps
- 3rd party printing/publishing support
- Interface improvements
- macOS High Sierra
- Apple File System
- H.265 (HEVC) compression
- Hardware and software accelerators
- Native support in Apple Pro apps
- Metal 2
- New features
- Further optimizations, now 10x faster than Metal
- 100x faster than OpenGL
- New debugging and performance analysis tools
- Mac window server running on Metal 2
- Mission control and dock far less overhead
- Metal performance shaders
- eGPU support and Dev Kit
- Metal for VR
- H.265 (HEVC) compression
- New Storage Pricing*
- watchOS 4
- Watch faces
- Activity Features
- Personalized motivational notifications
- Easier launch
- Automatic pool sets
- Pace per stroke type
- High Intensity Interval Training
- Multiple Workouts
- Sync with gym equipment
- Playlists start automatically with workouts
- Music controls in workout app
- New music app
- Multiple playlists
- Automatically syncs recently played music
- Greater app support
- Core Bluetooth
- Improved background tasking
- AirPods Automatically Pair
+ Appendix B
Newest Hardware Features
- HomePod
- 3 Major Features
- Great Sound
- Spatially Aware
- Mesh enclosure
- 7-tweeter beam-forming array
- Precision acoustic horns
- Directional control
- 4” Upward facing woofer
- Automatic equalization
- Low frequency mic prevents distortion
- A8 Chip
- Smartest speaker ever
- Real-time acoustic modeling
- Audio beam-forming
- Multi-channel echo cancellation
- Spatial Awareness
- Pairs Bluetooth automatically
- Detects the space it’s in automatically
- Subdivides audio into various frequency bands
- Demo centering vocals, direct energy, ambient audio
- Automatically pair and adapt with multiple units
- Your Apple Music library
- Listens for “Hey Siri”
- Siri trained for music knowledge
- Trained for assistant tasks
- HomeKit device support
- iPad Pro
- 10.5” and 12.9”
- A10X Fusion
- iPhone Cameras
- Giant Retina Flash
- USB 3 transfer speeds
- Fast charging support
- Smart Keyboard Cases and Leather Sleeve
- 64GB, 256GB and 512GB options
- 10.5” and 12.9”
- Improved Display
- iMac
- 21.5” maxes out at 32 GB 2400 MHz DDR4, user replaceable
- 27” maxes out at 64 GB 2400 MHz DDR4, user replaceable
- Fusion Drives
- Standard on 4k and 5k models
- 50% faster SSD
- Up to 2 TB
- Improved Graphics
- Baseline 80% faster with Iris Plus 640 (64 MB eDRAM)
- 4k Radeon Pro 555 2 GB and 560 4 GB
- 5k Radeon Pro 570 and 575 4 GB and 580 8 GB
- 5.5 teraflops max
- iMac Pro
- Space Grey Finish
- 10-key wireless keyboard
- New cooling system
- 80% increase in thermal capacity
- $4999 price point is a steal
- Space Grey Finish
- MacBook Air*
- MacBook Pro
+ Appendix C
- Apple File System (APFS)
- 10.0 Cheetah
- 10.1 Puma
- 10.2 Jaguar
- 10.3 Panther
- 10.4 Tiger
- 10.5 Leopard
- 10.6 Snow Leopard
- 10.7 Lion
- 10.8 Mountain Lion
- 10.9 Mavericks
- 10.10 Yosemite
- 10.11 El Capitan
- 10.12 Sierra
- iOS (unnamed at launch)
- iOS 2 (iPhone OS 2)
- iOS 3 (iPhone OS 3 until 3.2 where all versions suddenly became iOS to accommodate the iPad)
- iOS 4
- iOS 5
- iOS 6
- iOS 7
- iOS 8
- iOS 9
- iOS 10
- iMac
- 1998 G3
- 2002 G4
- 2004 G5
- 2006 Intel
- 2007 Aluminum
- 2009 Unibody
- 2012 Slim
- Mac Mini
- Mac Pro
- 2008 MacBook Air
- MacBook Pro*
- iPhone
- iPhone 3G
- iPhone 3GS
- iPhone 4
- iPhone 4S
- iPhone 5
- iPhone 5S
- iPhone 5c
- iPhone 6
- iPhone 6S
- iPhone 7
- iPhone SE
- iPad
- 2010 1st Generation
- iPad 2
- iPad 3
- iPad 4th generation
- iPad Air
- iPad Air 2
- iPad (2017)
- iPad Mini
- iPad Mini (retina)
- iPad Mini (3rd generation)
- iPad Mini (4th generation)
- iPad Pro 12.9”
- iPad Pro 9.7”
- iPad Pro 10.5”
- Apple Pencil
+ Appendix D
- Apple History
- Apple Acquisitions
- Apple Acquires Beats
- Apple Acquires Beddit
- Apple Acquires NeXT
- Apple Acquires Workflow
- Apple announces Apple Pencil
- Apple Campus
- Apple Hires Anand Lal Shimpi
- Apple Hires Breaking Bad Executives
- Apple Hires Jonathan Zdziarski
- Apple’s Stance on Cookie Tracking
- Aperture Closes
- Carbon API Gets Deprecated Suddenly
- Dr. Dre filming Apple’s first show
- EU Proposes Encryption Ban
- Apple Invites John Gruber and 5 other journalists to discuss the future of the Mac Pro
- iOS 10 nears 80% adoption
- Google Caught Surreptitiously Tracking Safari Users
- Jean-Louis Gassée on Apple’s Culture After 10 Years of iPhone
- Jimmy Iovine talks Apple Music
- John Gruber calls out lack of substantial leaked information prior to the 2017 Keynote
- John Siracusa Hangs Up the Keyboard
- Jony Ive Profile
- Kevin Lynch Introduces the Watch in 2014
- Organizational Restructuring (Craig in, Scott out)
- Phil Schiller Announces the redesigned Mac Pro
- Scott Forstall Profile
- Scott Forstall breaks his silence regarding iPhone development
- Steve Jobs introduces iPad
- Tim Cook talking about Ecosystem
- Tim Cook sports Glucose Monitoring band
- Tim Cook states iPad Pro is Apple’s Closest Vision for the Future of Personal Computing
- Tim and Bono Awkwardly Touch Index Fingers
- Trent Reznor talks about Apple Music
- Developer Tools
- APFS- Guide to file management with Apple’s new file system (2016)
- ARKit - Apple provides first class mixed reality tools
- Core Bluetooth - Allows developers to access bluetooth radios in Apple devices
- CloudKit - Allows developers to access iCloud services
- Core ML - Apple’s API to implement various trained models
- Core Motion - Apple’s API to access the accelerometer/gyroscope
- Metal - Apple’s low-level graphics API
- MusicKit - Developers can integrate Apple Music
- Safari 11
- SceneKit - Apple’s 3D graphics engine
- SpriteKit - Apple’s 2D gaming engine
- SteamVR - Steam’s VR SDK now coming to Mac
- Perceived Weaknesses
- A Very False Narrative: Samsung Galaxy
- A Very False Narrative: Microsoft Surface
- A Very False Narrative: Apple Watch
- Watch Battery Life Concerns
- Apple Is Doomed
- The Macalope exists completely to skewer this narrative
- Apple Maps
- iPod Killers
- Major apps removed from watchOS
- John Gruber’s Response
- Ex-Siri Engineers Frustrated
- Jim Dalrymple Response
- App Store houses over 2.2 million apps, 149 million new downloads per day
- Apple Maps 3.5x more popular than Google Maps on iOS
- Apple Music 2nd Largest Digital Streaming Service
- Apple Music Hits 27 Million Paid Subscriptions
- Apple Pay Most Prevalent Contactless Payment Solution in US
- Watch sales estimated at 7 million
- iCloud has 125 million Paid Subscriptions
- iMessage transferred 200k messages per second in 2016
- $7 Billion in Q1 2017
- Siri Supports 36 Languages
- Tim Plans To Double By 2020
- Would Qualify as a Fortune 100 Company
- Version History
- iOS 4 - Apple’s First Stab at Multitasking
- Launch features
- Lifewire version tracker
- Ben Thompson Whither Liberal Arts
- Federico Viticci dreams up a better iPad OS
- Google Assistant is doing very well, Siri is struggling
- M.G. Siegler’s Falling Apple Revisited
- Marco’s Frustration with Software Performance, and Phil’s Response
- Patrick Gibson asserts: Google is Improving at Design Faster than Apple at Services
- WeChat in China
- The Navbar is tapped out
- WWDC 2017
- Watch the 2017 keynote
- Watch the 2017 Platforms State of the Union
- Week in Review
- WWDC 2017 Workshops
- WWDC Returns to San Jose
+ Appendix E
- H.265 or HEVC explained
- Computer Specs
- Cost of iMac Pro competitive workstation
- Launch Day Doodle
- The Internet
- Famous Websites’ Home Pages in 1996-1997
- Netflix streaming in 2010
- PlayStation Eye
+ Appendix F
How It Works
- How big is an Atom? 10nm is about 90 silicon atoms packed tightly.
+ Appendix G
Enable Single Sign-On
+ Appendix H
- Accidental Tech Podcast
+ Glossary of Terms
- What does 2x and 3x speed sound like?
- What does 120 Hz mean?
- What is AR?
- How do amplifiers work?
- What is a bass radiator?
- What are blend modes?
- What is a backup clone?
- What is a checksum?
- What is color banding?
- What is classic Mac OS?
- Who is Eddy Cue?
- What is a Die Shrink?
- What is differential privacy?
- What is DirectX?
- What is Dithering?
- What is an eGPU?
- What is ECC RAM?
- Who is Epic? What is Unreal Engine?
- What does excursion mean?
- Who is Craig Federighi?
- What is Flash Storage?
- Who is Scott Forstall?
- What is Full Disk Encryption?
- What is H.264?
- What is H.265?
- What is HEVC?
- What is the HTC Vive?
- What are File Permissions?
- What is a Fusion Drive?
- What are Hard Links?
- What is Industrial Light and Magic (ILM)?
- Who is Jony Ive?
- Who is John Knoll?
- What is Linux? Is macOS a distro of Linux?
- What is metadata?
- What is a microarchitecture?
- What is Microsoft Exchange?
- How do microphones work?
- Why do Movies play at 24 Hz?
- What is Moore’s law?
- What was NeXT?
- What was OS X?
- What is a P3 display?
- What was Ping?
- What is a (Hard Disk) Platter?
- What is push email?
- What is a Snapshot?
- What is a Sparse File?
- What is a SDK (Software development kit)?
- How do speakers work?
- What is a teraflop?
- What is Thunderbolt 3?
- What is True Tone?
- What is the Tick-Tock Model?
- Why does everyone always make “These Go to 11” Jokes?
- What is TRIM support?
- What is Unity?
- What is Unix?
- What is Visual Inertial Odometry (VIO)?
- What Does Write Coalescing Mean?
- What is ZFS?
To those of you who have read this far: unbelievable, words cannot express my gratitude. Please feel free to share this as a resource for others; simply give attribution where it’s due, both to me and the people I’ve cited.
Special thanks to everyone who set the groundwork for this anthology. John Siracusa, John Gruber, Rene Ritchie, Jacqui Cheng, Serenity Caldwell, Anand Lal Shimpi, Benedict Evans, Ben Thompson and Ben Bajarin in particular: all of your hard work has had a profound impact on how I live in the world.