Jonathan Deutsch's weblog covering mac app development, engineering management, running a startup, HTML5, and all things Apple.

Making the System Setting Cube Farm Livable

Office Cube Farm, reminiscent of Ventura’s System Settings UI

Much has been said about macOS 13 Ventura’s upcoming redesign of System Preferences into System Settings. Quality of work aside, this release represents the most drastic change System Preferences has seen since the Mac OS X Public Beta in 2000. Yet it appears to be mostly re-arrangement, converting meticulously laid-out controls into long scrolling lists. I liken the visual refresh to an office cube farm: inexpensive for the developer, and unmemorable, uncultured, and soul-crushing for the user.

Often when developers convert a bespoke UI to a grid-based UI, it is because they are refactoring each item to a common paradigm. This reduces backend complexity and lets all items benefit from central improvements. Unfortunately, it does not look like this was a driving force behind Apple’s redesign. I see no evidence the System Preferences team has rethought OS-level settings primitives or reconsidered how users should be able to customize and control their OS. The new System Settings instead focuses on blindly matching the design of iOS, resulting in arguably inappropriate UI patterns for a desktop platform.

It got me thinking… what if System Preferences were reimagined? What could have been better over the last 22 years had it been well-structured under the hood? What might be easier with all settings sharing an underlying setting primitive data structure (even if it is displayed in a cubicle UI)?

Here are the first few ideas off the top of my head (from my Twitter thread):

  • All settings changes should be undoable (via ⌘Z, and redo via ⇧⌘Z).
  • There should be a list of the most recently changed settings.
  • There should be an easy way to restore defaults for a particular setting or group of settings.
  • Time Machine should allow viewing a diff of settings and rolling back to previous states.
  • Each setting should be deep-linkable. When a friend or family member asks how to do something, I can send them a link to the exact spot. When clicking on the link, it will highlight the entry.
  • Taking it further, I should be able to also transmit the setting or group of settings via this deep link URL. Of course, there will need to be a confirmation before enacting the changes.
  • Every setting should be able to be retrieved and modified via programmatic control. AppleScript is still fine by me.
  • All changes from the defaults should be easily exportable and transferrable to other machines.
  • I should be able to automatically sync relevant settings across different machines and devices via iCloud. Apple has done this to a small degree already with Focus states and Internet Accounts. We’re in the 21st century now, so all settings should have this option.
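Some of this is already possible in rough form today. As a sketch (macOS-only; the Dock domain and the pane identifier below are examples, and the exact keys and URL identifiers vary by macOS version), the `defaults` command can read, write, and export settings, and the `x-apple.systempreferences:` URL scheme can deep-link into a pane:

```shell
# Read a setting: is the Dock set to auto-hide?
defaults read com.apple.dock autohide

# Change it programmatically, then restart the Dock so it takes effect
defaults write com.apple.dock autohide -bool true
killall Dock

# Export a domain's settings as a plist for transfer to another machine
defaults export com.apple.dock ~/Desktop/dock-settings.plist

# Deep-link into a specific preference pane
open "x-apple.systempreferences:com.apple.preference.security?Privacy"
```

Of course, this is exactly the sort of thing that should be a first-class, discoverable feature rather than command-line archaeology.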

The information hierarchy and location for settings continues to be a fundamental problem. In the Ventura and pre-Ventura System Preferences, there’s no top-level categorization; often, disparate panes are grouped together, and there’s a mishmash of table groups, tabs, and sheets hidden behind buttons for sub-groups. Here’s a few examples:

  • “Desktop & Dock” has groupings for Dock, Menu Bar, Windows & Apps, and Mission Control. I would not have any clue Mission Control is there from the title, or even after opening the pane, as it is at the bottom of the long scrollable list. Implied in the pane name but missing is anything related to the Desktop, aside from an option for the Show Desktop keyboard shortcut, shown when you click “Shortcuts…” at the bottom. Of course, this duplicates information in Keyboard’s “Keyboard Shortcuts,” which is itself a whole sheet with a sidebar.
  • Displays has an “Advanced…” button, which has a “Battery and Energy” setting group! I’d never know that the option to prevent sleep when connected to the power adapter when the display is off would be tucked away here.

These messy hierarchies are common to most panes, and it is not a new problem. It isn’t a particularly easy problem to solve, either. A perfect hierarchy is impossible: some settings can be thought of as belonging to multiple areas (especially ones cutting across the system, like accessibility or privacy). Some settings have dependencies or cause multiple setting changes. However, there’s still plenty of room for improvement:

  • Each setting should have a clear primary hierarchy, one which does not allow combined “&” groupings. It starts with high-level categories in the OS (e.g., “hardware”, “software”) and extends as far down as needed.
  • This hierarchy is entirely exposed through a browser or outline view, not a simplistic sidebar. All groupings use the same UI, and nothing is hidden behind a labyrinth of buttons.
  • Each setting should also allow for alternate hierarchies under the hood to help with searching.
  • Each path component in the hierarchy can be considered a tag. Therefore, when searching, the browser/outline view can appropriately surface where matching items exist in the hierarchy.
  • The search could act as a tag browser, allowing me to AND items like “accessibility” and “mouse” to see the options that cut across both.
  • The search display could be made bigger and have a search history.
  • The search should be able to find dynamic items, such as those which may be populated when applications are installed.
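As a rough sketch of the tag idea (the setting names are hypothetical and nothing here reflects Apple’s actual data model), each setting’s hierarchy path doubles as a set of tags, and an AND search simply intersects them:

```python
# Hypothetical sketch: settings as paths whose components double as tags.
settings = [
    "hardware/mouse/tracking-speed",
    "software/accessibility/mouse/pointer-size",
    "software/accessibility/display/reduce-motion",
    "hardware/keyboard/repeat-rate",
]

def search(query_tags, paths):
    """Return paths whose tag set contains every query tag (AND search)."""
    wanted = {t.lower() for t in query_tags}
    return [p for p in paths if wanted <= set(p.lower().split("/"))]

print(search(["accessibility", "mouse"], settings))
# -> ['software/accessibility/mouse/pointer-size']
```

With alternate hierarchies stored as extra tag sets per setting, the same intersection would surface a setting under every grouping a user might reasonably look for it.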

The root of System Preferences’ problems is the sheer quantity of settings. As a software developer, it is often hard to make the call between allowing customization for users and avoiding complexity – both in the UI, and in my own test matrix. With only a few exceptions, Apple has been commendably conservative at adding new settings. 22 years of macOS adds up, and the complexity is there, like it or not. Hopefully, this post has a few ideas that Apple could ponder to help manage some of the complexity.

What do you think? What other “big picture” changes could be made to the System Settings?

Thanks to Ken K., Neil J., and Giovanni D. for feedback on this post.

I can’t help myself

From The Economist’s "The case for eating steak and cream":

Work by Keys and others propelled the American government’s first set of dietary guidelines, in 1980. Cut back on red meat, whole milk and other sources of saturated fat. The few sceptics of this theory were, for decades, marginalised.

I believe the word the author was looking for was margarinized.

The Mythical MacBook 4G

Marco Arment says:

If Apple wants to offer 4G in MacBooks, they can start whenever they want. Doing it properly will just take a bit more effort than adding a modem.

They can’t do it whenever they want. It requires new regulatory approvals as well as the much harder testing and certification with carriers (domestic and international). This would be required for each MacBook model.

Perhaps the Google Chromebook Pixel will force Apple’s hand into building it and dealing with all the red tape. Like Marco, I’d at least love for connection APIs to be in place. When tethering, I don’t relish Software Update downloading big updates or Xcode grabbing large docsets.

Glass – A Solution in Search of a Problem

When Google’s Project Glass was first unveiled ten months ago, the announcement post wrapped up by asking the community, “What would you like to see from Project Glass?” Today, Google revealed more about the unreleased product and continued the thread by begging for community involvement: “Using Google+ or Twitter, tell us what you would do if you had Glass, starting with the hashtag #ifihadglass.” Does Google not know what to do with the Glass?

Can you imagine if Apple announced the iPhone and said, “We’ve got a great always-on internet connected pocket sized device with a touch screen. What would you use it for?”

As Robert X. Cringely remarked in Triumph of the Nerds, for any new technology to gain acceptance, it needs a killer application. Most of the Glass’s features can be accomplished with a smartphone, and the convenience in the form factor is akin to a Pebble watch. By my read, Google has publicly admitted it doesn’t know what the killer app is for the Glass. Truly, the Glass is a solution in search of a problem.

This isn’t necessarily a death sentence. The PC itself didn’t have a killer application for many years until the first spreadsheet came along. With enough people excited (count me as one of them), perhaps an idea will emerge that puts Glass on the map before it is socially shunned out of existence.

Update: There’s been an argument this is a specific (and implied brilliant) marketing strategy. Google Glass won’t be given away for free. It is a consumer product, which will be sold. Sales happen when customers believe it will better their lives for more than the cost… so what is the value proposition which would make someone spend $1,500 for one? That’s essentially what Google is asking, but it is what they should be telling. That I won’t drop it on a roller coaster or when practicing Kendo isn’t reason enough. For a product in the shape of glasses, there’s a surprising lack of vision.

This is disappointing, because there will be a clock running on its life. If Google can’t find its value proposition/killer application soon enough, customers will not buy it, and developers will not spend time creating applications when there is no market. Given the lack of applications shown in the video, I’m a little surprised the Glass site is so heavily consumer-targeted without any SDK information.

I’d love to try to answer the question of what I’d do if I had a Glass. As noted, the potential is staggering, and few products have intrigued me as much as this one. But my brainstorming hits a wall when I think of the practical constraints which need to be considered. What is the battery life? What is the resolution of the screen and camera? How accurate is the positioning information? What are the general latency characteristics of its sensors? And so forth. Creativity lives on the ground, not in the sky.

Apple doesn’t dictate experiences, but it does communicate uses for the product. Remember the original iPhone ads? Given how new the device and technology was, this seemed wise to me (say, compared to the iPod silhouette ads — everyone knows what a music player does). The Glass is in the same category — something entirely new.

Making Your Own Misfortune

“Make your own luck” is a great philosophy to have about the world. It emphasizes taking a forest-level view of being in the right places at the right times, which is conducive to exciting new opportunities.

I’ve recently considered the less optimistic flip-side. If you said, “accidents happen,” a counter would be, “not if you are careful.” In other words, don’t make your own misfortune.

Unnecessarily increasing the probability of an accident invites bad luck. Perhaps it is weaving through traffic instead of staying put in your lane; next thing you know, you’ve collided with a car in your blind spot. Or it is leaving a hot kitchen stove unattended, and now the apartment is burning down.

In poker, when you’ve been getting terrible cards all night it becomes easy to convince yourself that a mediocre hand is worth playing. But this too is making your own misfortune. Others will be playing better hands and once you’ve hit middle set it is painful to let it go. But you’ll lose, and it is all because you put yourself in a situation where there was a difficult choice, and then chose wrongly. Put yourself in situations with easy choices.

This doesn’t mean being risk averse, but making sure your priorities are straight. Is it worth increasing the chance of death to get to a party 5 minutes quicker? The principle applies to issues beyond safety as well.

When programming, you could have an easy performance win by multi-threading a critical section. But down the road you’ll be inviting deadlocks and race conditions when another engineer begins working on the project without their head fully around the codebase.

It may be an art to understand complexity tradeoffs, or perhaps it just takes experience. But it could simply require reflection – how might you be making your own misfortune?

Beware of CSS3 filter effect rendering differences across browsers

CSS Filter Effects are the new hotness in Safari 6 and iOS 6, and have been around since Chrome 18. The blurs, shadows, brightness/contrast, hue shifts, and saturation shifts will add a new dimension of visual oomph to sites. However, be aware that they do not render consistently across different browsers. The look you’ve fine-tuned in Chrome may be hideous in Safari. Not only do Chrome and Safari render differently, but Safari renders differently than itself! That is, when applying a 3D transformation, the graphics-accelerated rendering path will appear different from the non-accelerated path. It is common to use “-webkit-transform: rotateY(0deg);” as a way to hack Safari onto the faster rendering path. Chrome does not suffer from this issue, and looks consistent across Mac and Windows. Interestingly, the effects in Mobile Safari on iOS are nearly identical to Chrome.
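For reference, the combination of effects described here looks roughly like this in CSS (the specific values are illustrative; in this era of WebKit the `-webkit-` prefix is required):

```css
/* Stacked filter effects: sepia, saturation, hue shift, brightness, contrast */
.fancy {
  -webkit-filter: sepia(40%) saturate(150%) hue-rotate(20deg)
                  brightness(110%) contrast(120%);
}

/* The no-op 3D transform commonly used to force Safari onto the
   GPU-accelerated path -- which is exactly where rendering diverges */
.fancy.accelerated {
  -webkit-transform: rotateY(0deg);
}
```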

Here’s the same effects (sepia, saturation, hue shift, brightness and contrast increase) viewed across different browsers:

[image removed]

I’ve filed <rdar://problem/12307742> CSS3 Filter Effects rendered inconsistently across Safari CPU path, GPU path, iOS, and Chrome.

If you’re only targeting one platform (or are sure it will only use a specific rendering path in Safari), this should not be a problem. Otherwise, make sure to test across the different browsers which support the effects. As of this writing, Firefox, IE, and Opera still do not support them.

HTML5DevConf 2012 Talk

I recently gave a short talk at HTML5DevConf called “Best Practices for Building Tools that Output to the Web.” The session covered some of the assumptions you can’t make if the output of your tool is going to be used as part of a larger site. I also dived into some concrete decisions and implementations we used for Tumult Hype.

If you’d like an overview, please download the slides.

The True Value of Automation

Automation is a bedrock of software engineering. Tools, scripts, tests, and even reports are vital glue that keep projects progressing forward. Because time spent on automation generally detracts from product work, it is considered an investment. Sadly, more often than not I see an erroneous equation applied to determine whether the investment will be worthwhile:

value = (manual repetitions × time per repetition) − (time to develop automation)

If the value is positive, the automation gets a green light. Otherwise, is it really a waste of effort? There are many factors this equation does not consider.

Automation is less error prone.
Processes are put in place to account for human error, so why add another layer where humans can go wrong? Computers won’t forget to strip symbols before deploying or test a feature used by thousands of people. Automating processes adds a known and reliable security blanket so engineering can proceed with confidence.

Automation can be run any time, without humans in the picture.
Need test results at 11pm? No problem. Your build engineer is out with the flu? The latest revision has already been built! If you need 100 runs through a test plan in a day, the computer won’t complain. Automation is tireless.

Automation can be run at a high frequency.
Just as high-frequency trading has changed the stock market, the ability to run scripts and tests more often can have a transformative effect on your processes. Always having a packaged build generated means others in your company can easily live on the latest bits or send one-off builds to users to confirm issues are fixed. Running unit tests on every commit will let you immediately pinpoint what code caused a regression instead of having bugs pile up. Older bugs are more expensive; they take engineers more time to identify and ultimately fix once they are less familiar with the code. The same principle applies to higher-level/UI tests and performance testing.

Automation can build on itself.
Writing an automated test is only the first step. Once you have tests, you can write infrastructure to kick these off. Then write tools to aggregate results. Next is generating reports. Complete the circle by automating emails of the reports to yourself and managers. Each step along the way may have a different return, but they are paved by the previous steps. This ultimately leads to an invaluable end-to-end workflow.

Automation is more fun!
Who rejoices in manual labor? Developing automation is a challenging activity and a great way to engage those on your team who have been burnt out by boring repetition. Each time a script is run, they’ll smile in knowing they’ve saved themselves time and work. This encourages even more automation to be written.

The next time a simple equation is levied against automation, be sure these factors are also considered. Of course, for some domains automation is not appropriate or is prohibitively difficult. It is less flexible when encountering ambiguous inputs and falls down at doing anything ad hoc. Automation must also be maintained and kept up to date. All said, in my experience the teams with effective automation in place are also those with the least technical debt, and they can be bigger risk-takers.

To wrap up, please observe this discussion between an engineer and his manager:

Kirk: Scotty, progress report?
Scotty: I’m almost done, sir. You’ll be fully automated by the time we dock.
Kirk: Your timing is excellent, Mister Scott. You’ve fixed the barn door after the horse has come home.

Note, the manager now realizes the value of automating as quickly as possible!

Two Weeks

Programming is not mindlessly implementing specs nor is it determining the best big-O algorithm for a mathematical problem. Programming is art; turning raw code into an implementation of your ideas. Programming is what you do.

Software Engineering is what your company does. Software Engineering figures out the most efficient and profitable way to develop a product. Software engineering is as much about people, process, and schedules as it is about data structures and protocols.

Here’s the scene: you’ve just gotten a great idea for some new feature, and you’re discussing it with your [software engineering] manager. He’s[1] enthusiastic, and then, like clockwork, the question comes:

“So, how long will it take to implement?”

The answer is two weeks.

From his perspective, three weeks means it is a large task with many variables, likely to take four. Telling him four weeks will make him question whether the project is worthwhile, as there are other tasks you are needed for. Five weeks will signal that you are incompetent. Bringing up the M-word (that’s “months”) will be a sure-fire way to lose his support.

If the feature is doable with one week’s hard work, you should still say two weeks. One week will set you up for coming in late — there may just be those variables you didn’t anticipate, or there could be another issue that comes up and sidetracks you. The extra padding can also give you time to refine your design as it develops and make a better demo. If you are able to finish in less than two weeks, you’ll look that much better.

Don’t worry, two weeks will work. Programming is art, and you’ve already got the picture in your head. A tight schedule will force you to improve your engineering by making the necessary tradeoffs without falling into a perfectionist trap. If you work really hard and stop checking Facebook, you can work 12 hours a day for the next 14 days. This will give you 168 hours, the equivalent of a little more than four regular 40-hour weeks; plenty of time. Remember, you don’t need to deliver 100% of the feature in this period, only the easiest 80%. If the feature turns out great, you’ll have time to refine it later.

Two weeks is also the average period between meeting times, so it is unlikely you’ll need to give an impromptu and risky demo. At this next meeting, you can impress your manager with the end result; it always looks good to show something. If the feature gets nixed, you will not feel that you have wasted a significant chunk of your life.

If other tasks on your plate will prevent you from immediately starting on your feature, you should not add this into the time estimate as the mentioned negatives will apply. Instead, split it up: “I can get started in two weeks, and it will take another two weeks after that.”

There are two main scenarios where you do not want to commit to two weeks:
1. A small project which should be measured in hours.
2. A large project which should use two weeks as milestone markers.

To wrap up, please observe this discussion between an engineer and his manager:

Kirk: Scotty, how long will it take you to have everything automated?
Scotty: Oh, six weeks, Captain. But yah don’t have six weeks, so I’ll do it in two.
Kirk: Mr. Scott, have you always multiplied your repair estimates by a factor of three?
Scott: Of course, Captain. How else would I keep my reputation as a Miracle Worker?

🙂

[1] Replace he/his with she/her if your manager is a woman.

A Kendo Story

A professional cockfighting trainer, Ki Seishi, was told to train a chicken by the King. After 10 days, the King asked: “Is he ready to fight yet?” Ki Seishi answered: “No, not yet. He becomes blindly ferocious, and eagerly looks for an opponent.” Another 10 days passed and the King asked again. Ki Seishi answered: “No, not yet. When he hears another chicken crow, or when he senses another chicken’s presence, he will radiate his fighting spirit.” After another 10 days, the King asked again. Ki Seishi answered: “No, not yet. When he sees another chicken, he will glare fiercely and lose his temper.” When the King asked again after another 10 days, this time Ki Seishi said: “I believe he is ready now. Even if another chicken crows and challenges him, he will remain unperturbed, just like a wooden figure. This proves that he is full of virtue. He has got it now. No chicken is a match for him. Every chicken will run away when they see him.”