Fail Faster

The fine folks over at Extra Credits have an excellent video on a topic that is near and dear to my heart: failing faster. The gist of the video is that it is important to learn how to use tools such as low-fidelity prototypes to validate an idea. The key takeaway is that you want to learn from your mistakes as quickly and as cheaply as possible. Waiting until you have the perfect idea all figured out takes too much time (and really, you won't have it all figured out). Likewise, immediately jumping into writing code means that fixing your mistakes is much more expensive to do (and you will be more hesitant to do so).

I have personally been involved in this type of situation many times throughout my career. In one particular instance, I was part of a team working on a new major feature for an app. Unfortunately, the development process devolved into 'prototyping in code' as major changes were made on a daily basis to the visual design, user flow, and business logic of that feature. This was a terribly expensive way of figuring out how things should work. When we tested the feature with a few handpicked users, the flaws in our design were immediately obvious. We thought that the design was generally good and understandable (albeit with a few rough edges), but the participants in the user testing pointed out sizable problems with the design that made it clear that this feature was not ready to ship. It's as if we were blind to our own design.

After this particular experience, I championed the idea of using interactive prototypes for further design iterations. Each design iteration consisted of 'tappable screenshots' that our test users could try out and use to provide us with feedback. Making changes to a particular screen or to the user flow was as simple as dropping in a new image file from Photoshop or designating a new tappable area on an existing screen. The turnaround time for these changes could be measured in minutes or hours instead of days and weeks. In the end, the ability to 'fail faster' with the interactive prototype helped to make the feature better in a shorter period of time than what could be done with code.

Folks, I know it can be tempting to immediately jump into code; that's pretty much what developers are inclined to do. However, understand that it may not always be in your best interests to do that. Find cheaper and faster ways to validate your ideas.

Bonus:

There are many different tools that can help you 'fail faster'. These are the ones that I use on a regular basis:

  • Pen and paper or a whiteboard - you really can't get much faster and cheaper than this.
  • POP (Prototype on Paper) - this app makes it easy to take a photo of things I have sketched in my notebook or on a whiteboard and add tappable hot zones with transitions.
  • InVision - this web app provides a lot more horsepower in terms of the transitions, collaboration, and version control that it supports. I use this with Photoshop mockups to provide a more "real" feel than what POP provides.

Don't Be Too Hasty To Do-It-Yourself or Pick-Something-Off-The-Shelf

A common question I get from folks who want to create an app or website with a backend system is which toolset I would recommend. Should they use something baked into the platform, like iCloud? Should they use a multi-platform service like Parse or Heroku? Or should they write their own backend system from scratch?

The answer? It depends.

Honest. That's the truth. That's not an attempt to evade the question.

Rather than immediately jumping to answer the question, I always ask my own question to gain more context:

What are you trying to accomplish?

The answers to that question fall along a spectrum. On one end, you have the Marco Arment (formerly of Tumblr and Instapaper) camp:


The common wisdom, which Justin suggests, is to go directly to a highly abstracted, proprietary cloud service or a higher-level hosted back-end — the kind that are so high in the clouds that they call themselves “solutions”. But the “BaaS” landscape is still very unstable with frequent acquisitions and shutdowns likely, and hosting on VPS-plus-proprietary-services clouds like Amazon Web Services or higher-level services like Heroku or App Engine can get prohibitively expensive very quickly. Developers who build everything on these services by default would probably be shocked at how cheaply and easily they could run on dedicated servers or unmanaged VPSes.

On the other hand, you have the Brent Simmons (Q Branch / Vesper) camp:


Well, my first thought was I don't want to run an actual server. I don't want to do that. Life's too short; I have to write code.

I often see debates on Twitter, blogs, or podcasts about the merits of both approaches. Depending on the particular biases of the author or host, the result is typically choosing one of the two extremes. Before jumping into one side or the other, however, it's important to understand the fundamental assumptions being made and what each side is attempting to accomplish.

The fundamental assumption behind the Do-It-Yourself side, as exemplified by Marco, is that you are creating something that needs the greatest amount of flexibility and independence. By picking this extreme, you are deciding that control over your own destiny outweighs the burden of creating and continuing to maintain your own backend solution.

The fundamental assumption behind the Pick-Something-Off-The-Shelf side, as exemplified by Brent, is that you are creating something that needs the least resistance on its path to fruition. By picking this extreme, you are deciding that the effort saved by outsourcing outweighs the risk of not fully owning your backend solution.

Of course, these are two opposite ends of a spectrum. The solution that meets your particular needs will probably fall somewhere in the middle.

By all means, if you are intent on creating The Next Big Thing, then it makes sense to do things yourself and not be at the mercy of platform owners. However, if you are building something as a hobby then it makes sense to offload the things that aren't core to your interests.

It also makes sense to consider whether this is intended to be the start of a business or is intended to be a learning experience. In the former case, you have to weigh the tradeoffs between controlling your livelihood and getting to market quickly. In the latter case, you have to weigh the tradeoffs between focusing on breadth versus depth.

Folks, don't be too hasty to do it yourself or pick something off the shelf. It's not a simple decision.

Announcing The More Than Just Code Podcast

I'm proud to take part in announcing a new podcast that is now available. That podcast is called More Than Just Code.

It's a weekly show that covers topics that impact iOS and Mac developers. As the show's title suggests, we also consider the business perspective on each week's topics (i.e. 'more than just code').

The show is co-hosted by a transcontinental panel of developers: Tim Mitra, Aaron Vegh, Mark Rubin, and yours truly, Jaime Lopez.

Folks, check it out (and subscribe!): http://mtjc.fm/

Update Feb 25, 2015: New website URL, folks.

Teach Users How to Use Your App by Having Them Actually Use Your App

It's late at night. The empty cans of Red Bull tower over your desk precariously. You've done it. You've finally created your beautiful, polished, delightful app. The blood, sweat, and tears will all be worth it once you hit that delicious button to submit to the App Store.

You hesitate. You have a sense of worry gnawing at the back of your mind. 

What if users don't immediately comprehend my glorious design?

What shall I do?

I know! A tutorial! That's the ticket!

Netflix app for iOS (image via http://www.mobile-patterns.com/coach-marks)


Suddenly, your beautiful app isn't so beautiful anymore. You've decided to smack the user in the face with a brain dump tutorial.

...

Why is it that so many apps fall into this trap? The most common reasons seem to be that developers run out of time to properly implement a tutorial system, or that they fail to realize the onboarding experience is an integral part of the app, one that requires just as much design effort (perhaps even more) as the rest of the app. Yet it is still common to see apps that don't give much thought to how users will learn to use them.

Whatever feelings you may have about Facebook's Paper app, they at least took a relatively novel approach to the problem of teaching users how to use an app with an unconventional design. While it may be somewhat heavy-handed at times, the tutorial system in Paper clearly took time to design and implement. In fact, the Facebook Paper team gave a presentation on how they approached the problem with contextually aware tutorials.

Developers can look outside of the traditional app development industry for inspiration as well. The game industry has spent decades working on this very problem. Take, for example, this analysis of the first level of Super Mario Bros. by the folks at Extra Credits:

The game designers at Nintendo carefully crafted the first level experience to teach players the skills that they will need throughout the game. They did so without dumping a bunch of explanatory text right at the start of the game or requiring that a player read the manual.

You might very well ask, 'how can these same design principles be used when creating an app?'

For starters, you should consider what your first-run experience is like for a user. Does your app have a bunch of empty states? Design a way for those empty states to present a call to action, or design the first-run experience so that the user never sees an empty state to begin with (for example, by pre-populating the app with content the user is reasonably expected to enjoy). Does your app involve a complicated e-commerce transaction flow? Use progressive disclosure to show users where they are in the process.
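Those two empty-state strategies can be sketched as a simple decision in whatever view layer you use. Here is a minimal, hypothetical illustration in Python (none of these names come from a real app; `ScreenState`, `first_run_state`, and the call-to-action text are all invented for the sketch):

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class ScreenState:
    """What the first screen should show: a list of items,
    plus an optional call-to-action message."""
    items: List[str]
    message: Optional[str] = None

def first_run_state(user_items: List[str], starter_items: List[str]) -> ScreenState:
    """Pick a first-run presentation that avoids a blank screen."""
    if user_items:
        # Returning user with content: nothing special needed.
        return ScreenState(items=user_items)
    if starter_items:
        # Strategy 1: pre-populate so there is no empty state at all.
        return ScreenState(items=starter_items)
    # Strategy 2: turn the empty state into a prompt to act.
    return ScreenState(items=[], message="Add your first item to get started")
```

The point of the sketch is that the empty state is a designed state, not an accident: the app always decides what a first-time user sees.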

Folks, Super Mario Bros. doesn't bombard players with every possible bit of information they could ever need at the beginning of the game, and there is little reason why apps should be any different. Teach your users how to use your app by having them actually use your app.

Bonus:

If you enjoyed the design analysis done by Extra Credits, you might also enjoy these videos (warning: they are longer and may include profanity, so be careful at work).


Hiring for 'Culture Fit' is Absolute Garbage...and Absolutely Important

Perhaps one of the most irritating terms to arise during the current tech renaissance is 'culture fit'. What, pray tell, does the term mean?

Ostensibly, 'culture fit' is intended to mean that a potential job candidate will immediately 'gel' with the existing team. Once critical 'culture fit' has been achieved, a company will reap the benefits as the Borg-ified development team bangs out line after line of beautiful, scalable, and coherent code. Launch parties ensue, followed shortly thereafter by an IPO. Investors cheer. The team eventually celebrates being flush with success and cash by sipping Mai Tais on a tropical beach.

Unfortunately, that isn't the reality. In the best case, the team suffers from a lack of diversity and is held back from achieving great results as it becomes mired in groupthink. In the worst case, 'culture fit' is a means by which a team can turn into a members-only club that keeps out individuals who are deemed different (for example, by being women).

'Culture fit' is a terrible thing, right? Something that should inspire us all to grab our pitchforks every time the foul utterance escapes some fool's lips?

Actually, no.

Why? If for no other reason, because no one wants to work with a jerk. Making sure that someone isn't going to turn into a team cancer or a morale-killing psychopath is important. The workplace isn't meant to be an episode of Game of Thrones, folks.

Development teams shouldn't be looking for a candidate's 'culture fit' if by doing so they are only evaluating the rather shallow layer that can be quickly and easily perceived. Does this person look similar to me? Does this person have the same background? Does this person have the same interests? These questions aren't important.

What is important? A person's 'culture fit' in terms of their values. Specifically, the values that matter for the company's success. Take, for example, Atlassian's stated values. That company places an emphasis on being honest with its employees and customers. The best way to evaluate a candidate's 'culture fit' for Atlassian is to determine if the candidate shares the same emphasis on honesty.

Folks, you should evaluate potential job candidates on their competencies and their ability to embrace the company's values. Don't use 'culture fit' as an excuse to exclude people who are 'different'.

Treat Employees with Respect, Especially When Firing Them

Josh Constine, in a post at TechCrunch:

That’s why the Denver-based Beatport was considering firing the employees over a conference call, but decided to send human resources representatives to SF. The company worried employees would destroy the office if not supervised. Meanwhile, multiple sources report that the startup has let go of around 20 employees in Denver, including the majority of the engineering team there. Two other music industry sources say Beatport was still operating at a loss after Q3 saw it lose $1 million on $12.1 million of revenue.

It's ridiculous that a company would ever consider firing employees via a conference call. It's incredibly insulting to the employees that this was even a possibility.

Being a manager is difficult. Having to break difficult news to employees comes with the territory. When it comes to people's livelihood, the proper way to let them know that they will no longer be employed is to do so privately and, if at all possible, in person. Ideally, there should be a transition period during which the employee can transfer his or her responsibilities and knowledge to others who remain at the company.

Think about it for a minute. Would you feel betrayed or angry if an employee suddenly left the company with little to no notice? Of course you would. Professional courtesies apply to both the employer and employee. The manner in which you treat employees at the beginning and the end of the employment relationship says a lot about your company's core values and about your abilities as a manager.

Folks, treat others as you would like others to treat you.

The Complex World of the Simple, Tiny, Insignificant Progress Bar

One of my earliest professional programming tasks was to create a progress bar for an application. The requirements were simple, really: show the user how far along the application was in its processing of a new data set. Easy, right?

Wrong. As it turns out, for many tasks it is actually non-trivial to display a meaningful progress bar to the user. Predicting the future can be difficult, it seems.

In fact, there is some serious HCI (human-computer interaction) research being carried out regarding the progress bar and its impact on user experience. One such paper focused on displaying progress information in a variety of ways, while keeping the task time constant, and showed that people can perceive a progress bar as faster or slower even though the time to complete is exactly the same. A follow-up paper further explored the progress bar by changing its visual attributes (for example, making the bar pulsate at different frequencies). Once again, the time to complete was always the same, but the perceived passage of time varied.

Just for fun, check out some of these progress bars: http://www.animatry.com/blog/progress-bars

Do any of them seem familiar? There is an excellent chance that you've encountered them while copying a file in Windows or installing the latest version of OS X. I bet you've even programmed one of these.

As for my younger self, I spent quite a bit of time implementing and refining the progress bar for that data installation process. The exact design of the final solution has been lost to my memory in the mists of time, but as I recall it was some combination of an overall percentage with a 'live' update of the current files being processed. The overall percentage gave the user a rough sense of how far along the application was in the process, and the file names (which flew by far too quickly to be meaningfully absorbed by the human consciousness) gave the user the reassurance that the process was indeed continuing and hadn't gotten stuck somewhere. Perhaps not the optimal solution, but it got the job done.
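That two-part display (an overall percentage plus a live ticker of the current file) can be sketched in a few lines. This is a hypothetical reconstruction in Python, not the original implementation; `report_progress` and `process_file` are invented names, with `process_file` standing in for whatever per-file work the installer actually did:

```python
import sys

def report_progress(files, process_file):
    """Process each file while showing an overall percentage and the
    name of the file currently being handled on one terminal line."""
    total = len(files)
    for i, path in enumerate(files, start=1):
        percent = int(100 * i / total)
        # '\r' rewrites the same line; even when the percentage barely
        # moves, the changing file name reassures the user that the
        # process hasn't gotten stuck.
        sys.stdout.write(f"\r[{percent:3d}%] Processing {path}")
        sys.stdout.flush()
        process_file(path)
    sys.stdout.write("\nDone.\n")
```

The percentage answers "how far along am I?" while the file ticker answers "is it still alive?", which is exactly the combination described above.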

Folks, don't let anyone tell you that adding a progress bar is an 'easy' task.

Don't Do Retrospectives Unless You're Going to Do Them Right

The odds are good that you've been involved in some sort of retrospective meeting. It may have been called something else such as the popular 'post-mortem', but the purpose is generally the same for any given software release/sprint/iteration: figure out what went well and what didn't go well. Why, then, do so many retrospectives go awry? In my experience, there are three very common reasons: there are too many items in the 'must improve' list, there is no follow-up on the items in that list, and the list isn't very good to begin with.

Too Many Items 

One common problem I've seen is for a team to put too many items on the list of things to be improved for the next iteration. Ever had one thing to do? How was that? Even if it was a difficult task, at least you could wrap your head around it. Ever had a thousand things to do? How was that? Overwhelming, right? Having too many items on a 'must improve' list is arguably as bad as not having a list at all. While it is often important to document all the ideas on what could be improved, it is best to focus on a handful of items (ideally, one or two) that could be improved for the next iteration. If your team improves one or two things every iteration, then that is continuous improvement. 


Overwhelming.

Lack of Follow-Up 

Even if you manage to decide on a small number of improvements to be made, you can still run into trouble by failing to follow up on the tasks that will implement them. If no one has the responsibility of ensuring that the tasks get done, then it's quite likely that they will not get done. It doesn't really matter whether your manager, scrum master, team lead, or intern is the responsible party; what matters is that someone is making sure the tasks get completed. By the way, the person responsible for ensuring that the tasks get done doesn't necessarily have to be the same person who actually implements the improvement. They just need to make sure that the tasks don't fall through the cracks during the heat of battle.

Nothing But Whining 

Okay, so you have a small list of improvement items and someone is assigned to make sure that those items are completed. Everything is great, right? No. You can still have problems if your list isn't very good to start with. While it is common to focus on things that went wrong during an iteration, it is important to remember the things that went well too. It's too easy to get caught up in having 'improvements' that revolve around negative things (e.g. "must make sure that we get the specifications from the customer") and lose sight of the positive things that have been done that could be further improved (e.g. "integrating our source control with our bug tracking system was great, maybe we can integrate that with our help desk"). Improving on your improvements is allowed.

Folks, I'm not going to claim that this is an exhaustive list of things that can be done to make sure that your retrospectives are fruitful. What I will claim, however, is that committing to a small list of well-thought-out improvements will make your software development life better.