Sunday, September 7, 2014
Chasing the Latest Trends
I'm an engineer. I love playing with the newest toys. But acquiring those toys has a cost - and not just in terms of money. More important is the time invested to learn the new system and transition from the old.
But hey, for me, the benefit (fun) usually outweighs the cost (money/time).
A problem can arise, though, in domains where the technology changes rapidly - so rapidly that if you try to keep up with everything, you are constantly in a learning mode and never get to a productive mode.
I'm going to go out on a limb and say that is what is currently happening with HTML and JavaScript libraries.
One large-scale web application I worked on started with the Prototype library (heh, yea). In mid-project we switched to OpenRico (remember that guy?). Then we said, "Oh hey, the newest thing is MooTools." Then there was the big one, jQuery. Backbone? Oh, Angular is so hot right now. But what about Google's new Polymer?
Granted, it's exciting to see the evolution of the web platform. But if you spend all of your time moving your project to the latest, hottest library so that you can trash talk all of the old fogies who still use the passé crap, you'll never be able to get any real work done.
But, come on, we can't still be coding in COBOL, right? Nobody wants to spend their whole career working on old technologies. Definitely! So how do we decide whether to stick with something old or jump to the new kid on the block?
Nassim Taleb (author of The Black Swan) gives the answer:
The longer a certain idea or technology has been around, the longer we can expect it to survive.
He uses the example of a chair. We may imagine the future with chairs made of exotic materials that fly. But the simple wooden chair has been around for thousands of years. Chances are, in 50 years, we will still be sitting on simple wooden chairs. Why? They have proven their effectiveness.
In terms of software, "future-proofing" is very hard to do. But applying Taleb's principle can help us. For example, raw HTML has been around for a long time, so it will likely stick around long into the future. So maybe we pick a library or framework that stays as close to raw HTML as possible.
Another example: Relative to other JavaScript frameworks, jQuery has proven its effectiveness over a good span of time. Sure, there will be a new king someday. But it's prudent to wait and see who can prove more effective over time.
How long is that time? I don't know the answer to that, but I'm sure it varies by domain.
Again, as an engineer, this 'waiting' is against my natural inclination. I want to get the new toy ASAP. But if I'm responsible for a piece of software that my company will be maintaining years from now, I must force myself to make prudent decisions.
Labels: html, javascript, nassim taleb, software engineering, technology, trends
Thursday, July 31, 2014
Digging for Requirements
I'm again going to refer heavily to The Pragmatic Programmer. In fact, the title of this post is one of the sections in chapter 7.
Here the authors mention that the term 'requirements gathering' is misleading. It somehow implies that the requirements already exist somewhere and just need to be picked up. Instead, they say requirements need to be dug for.
Users understand their workflow, but not in the same way that an engineer needs to understand it. A user doesn't particularly care how much of his workflow is defined by the system architecture, company policy, laws of the country, industry standards, or even habit.
But an engineer cares. He cares because some of those factors will change faster than others. Some don't even need to be factors anymore. So the system that he builds has to be flexible enough such that the more changeable factors can be accommodated easily.
It's important to discover the underlying reason why users do a particular thing, rather than just the way they currently do it. At the end of the day, your development has to solve their business problem, not just meet their stated requirements.
The above quote hits the nail on the head: the stated requirements are usually not the actual system requirements. Hence there is a need to dig down into those stated requirements, find the real system requirements in there, and consider the rest as configuration options.
Here's the example the authors use. Let's say a stated requirement is: "Only an employee's supervisors and the personnel department may view that employee's records." It sounds reasonable enough, but it embeds company policy, which can change often. Digging around that stated requirement reveals the actual system requirement: "An employee record can only be viewed by a nominated group of people." Who comprises that group is simply a matter of configuration... the code doesn't care.
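To make that concrete, here's a minimal sketch in C# (my own illustration, not code from the book; the class and the sample user names are made up). The authorization logic only knows about "a nominated group"; who is actually in that group comes from configuration:

```csharp
using System;
using System.Collections.Generic;

// Hypothetical sketch: the *system* requirement is "an employee record can
// only be viewed by a nominated group of people." Who is in that group is
// configuration, not code.
public class EmployeeRecordAuthorizer
{
    private readonly HashSet<string> _nominatedViewers;

    // The nominated group is injected (read from a config file, a database,
    // an admin screen - whatever). The logic never mentions "supervisor"
    // or "personnel department."
    public EmployeeRecordAuthorizer(IEnumerable<string> nominatedViewers)
    {
        _nominatedViewers = new HashSet<string>(
            nominatedViewers, StringComparer.OrdinalIgnoreCase);
    }

    public bool CanView(string userName)
    {
        return _nominatedViewers.Contains(userName);
    }
}

public static class Program
{
    public static void Main()
    {
        // Today, company policy nominates supervisors and HR. Tomorrow it
        // might add auditors. Either way, only this configured list changes.
        var authorizer = new EmployeeRecordAuthorizer(
            new[] { "alice.supervisor", "bob.personnel" });

        Console.WriteLine(authorizer.CanView("alice.supervisor")); // True
        Console.WriteLine(authorizer.CanView("eve.intern"));       // False
    }
}
```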
It's arrogant to say that an engineer understands a user workflow better than the user himself. However, the engineer obviously must understand it from a system architecture perspective. And since that is something users would never need to think about, stated requirements will almost always need some digging.
Sunday, April 20, 2014
Don't Think Outside the Box
Gordius, the King of Phrygia, once tied a knot that no one could untie. It was said that he who solved the riddle of the Gordian Knot would rule all of Asia. So along comes Alexander the Great, who chops the knot to bits with his sword. Just a little different interpretation of the requirements, that's all… and he did end up ruling most of Asia.
Such is the introduction to a truly insightful section of the book The Pragmatic Programmer. The ideas presented in this section, like many great software engineering principles, expand beyond the realm of programming.
Programming in any existing language requires clarity of thought. One significant component of that is the removal of (or at least the verification of) assumptions. This idea is equally applicable to any problem we might face in the "real world." When trying to solve such a problem, assumptions can really be the enemy:
If the "box" is the boundary of constraints and conditions, then the trick is to find the box, which may be considerably larger than you think.
The key to solving puzzles is both to recognize the constraints placed on you and to recognize the degrees of freedom you do have, for in those you'll find your solution...
It's not whether you think inside the box or outside the box. The problem lies in finding the box - identifying the real constraints.
This really resonated with me. When given a problem, we are almost never given the full set of real constraints. Most constraints are assumed or imagined, leading us to believe we have a much smaller box. This leads to potentially less effective (and certainly less innovative) solutions.
So what should we do? The authors continue:
When faced with an intractable problem, enumerate all the possible avenues you have before you. Don't dismiss anything, no matter how unusable or stupid it sounds.
This isn't so ground-breaking. It's basically the idea of "brainstorming" that we all learned in elementary school. But the authors go a step further:
Now go through the list and explain why a certain path cannot be taken. Are you sure? Can you prove it?
This step is where I know I have failed in the past. I can generate a list of fantastical ideas, but I know I have been too quick to prune some of the more outlandish ones.
I won't make that mistake again.
Wednesday, March 19, 2014
"He Who Can Do This Has the Whole World with Him"
Back in November of 2013, I posted about how to ask for a favor. In a nutshell: make it so easy that people don't notice they're doing it. This has clear implications for software design.
I just read that Cinemark went a step further: rewarding you for doing them a favor.
http://lifehacker.com/cinemark-rewards-you-for-turning-your-phone-off-during-1545581081
Now, I can't comment on the efficacy of this app, but I love the principle. I'll again reference Dale Carnegie: "There is only one way under high heaven to get anybody to do anything... And that is by making the other person want to do it." He further says: "So the only way on earth to influence other people is to talk about what they want and show them how to get it."
Let's think about the moviegoer with his phone on. Sure, he may not want to disturb others in the cinema. But when he gets a tweet of a cat doing something funny (or Instagram buzzes with a picture of what his second cousin is having for breakfast), his personal desire suddenly trumps his concern for the rest of the audience.
Cinemark's tactic, then, is based on realizing that people care about their own desires much more than the desires of others.
Why has it been so hard to get people to behave in an environmentally-friendly way? This same reason! People care more about their personal inconvenience and/or expense than helping literally the rest of the planet by being green.
So what did Tesla do? Did they market their cars as a way to help the environment? No! That would be appealing to the wrong desire. They market their cars as high-performance status symbols. If you go to teslamotors.com right now, you'll see that the biggest statement on the front page is "THE HIGHEST SAFETY RATING IN AMERICA."
In other words, they are showing you how you can benefit yourself. They know that you really care about that more than the environment.
So whether we are building software or marketing a product, we must realize that the only way to get people to do something we want is to give them what will benefit them personally.
Labels: Cinemark, Dale Carnegie, favor, reward, software engineering, Tesla
Sunday, November 10, 2013
How to Ask for Feedback... or Any Favor
I don't remember much about 7th grade. But I do remember one time when I asked some of my classmates to help me fold sheets of paper to be passed to the rest of the class. I offered [what I considered] a tip as to how to do it more efficiently. But the response I got was, "When you ask someone to do you a favor, don't then ask them to do it faster."
Now, when we ask someone to give us feedback, we're really asking them to do us a favor. We want to improve whatever it is we're asking for feedback on and thus by providing it, they are helping us. So if they are already going to this effort to help us, we really shouldn't ask more from them than necessary. In fact, we should make it as easy as possible for them to help us.
Probably the most common mechanism for providing feedback today is the survey. Someone buys a product, uses a service, attends a class, etc., and then fills out a feedback survey about it.
We can see the same thing with software and websites. There is often a form or something that vendors use to gauge the user experience of their digital product. I posit that such a mechanism tries the patience and goodwill of users. It makes them have to go out of their way - do extra work - to do a favor for the vendor. So some vendors offer small rewards for filling out these surveys, in recognition of this fact.
But I think we can do better. Consider this juvenile, but brilliant, example:
Why do I say this is brilliant? It takes something the user has to do anyway, and turns it into a feedback mechanism. The user doesn't have to do any extra work or go out of his way at all. He just does his normal business (heh) and has no choice but to provide feedback in the process.
I just checked and apparently it's no longer there, but Skype used to be a good example of this. You would make a call (over the internet) and when the call was over, you would have to close the 'call' window. But the only way to close it was to click a button that gave feedback on the quality of the call. Now, you have to close that window anyway. So it's no extra work for me to provide feedback while I do it. It's not intrusive. It's convenient.
This is the kind of mechanism we should be using more with software. Users are using it anyway, why not build in ways to gather feedback that don't disrupt their workflow?
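Here's a rough sketch of the idea in C# (purely hypothetical - this is not Skype's code, and the class and method names are mine). The only way to close the call window is to pick a rating, so the feedback rides along with an action the user had to take anyway:

```csharp
using System;
using System.Collections.Generic;

// Hypothetical sketch: the window has no plain "Close" button. Closing it
// means clicking one of the rating buttons, so feedback is captured as a
// side effect of something the user was going to do anyway.
public enum CallRating { Poor = 1, Okay = 2, Great = 3 }

public class CallWindow
{
    private readonly List<CallRating> _collectedRatings;

    public CallWindow(List<CallRating> collectedRatings)
    {
        _collectedRatings = collectedRatings;
    }

    // The only way out: pick a rating, window closes.
    public void CloseWithRating(CallRating rating)
    {
        _collectedRatings.Add(rating); // feedback captured, zero extra steps
        Console.WriteLine($"Call window closed. Thanks for rating it {rating}.");
    }
}

public static class Program
{
    public static void Main()
    {
        var ratings = new List<CallRating>();
        var window = new CallWindow(ratings);

        // The user just wants to close the window...
        window.CloseWithRating(CallRating.Great);

        // ...and we got a data point for free.
        Console.WriteLine($"Ratings collected so far: {ratings.Count}");
    }
}
```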
I, personally, am more than happy to provide feedback to software companies if that process doesn't get in my way. For example, I always check the 'send anonymous usage statistics to the vendor' box when I install a program. I'm happy to help in making the software better, as long as it doesn't inconvenience me.
Granted, we may not be able to do this for every kind of feedback. But I feel there is a huge amount of data that we are missing out on because we just make it too hard for people to give it to us. We could be making our products and services a whole lot better... we just have to be a little creative in how we ask for feedback.
Monday, September 30, 2013
I Love Limitations in Programming Languages
I'm not being sarcastic! I really do love them. Here's the thing:
Writing code is a very free-form exercise. Each developer has their own style, there are usually many ways of accomplishing the same task, and everyone has their own idea of what 'pretty' or 'clean' code looks like. The computer, of course, doesn't care. It just cares that, basically, there is some code. It finds this 'some code' and it happily compiles or interprets it.
Humans, however, care about more than that. At the most fundamental level, humans care about understanding what the code is doing. (At least, they should!) Now, what is easier to understand: one thing or 100 things? I hope you said one thing.
And this is exactly my point: If the language only provides one way of doing something, it makes it easier for every other developer to understand what the code is doing. If the language provides 100 ways of doing the same thing, well, then every developer has to know all 100 ways.
So some might look at that language that only provides one way as very limiting. "There's only one way! Psshaww!" But I look at that language and say "That's so easy to understand!" I love those kinds of limitations because they make things easier. And I love easy.
Now, if the designers of the language are clever, they will make it such that the one way happens to be the best way. In other words, they build the best practice into the language and don't allow you to (or at least make it difficult for you to) deviate from it. In this case, I really love the limitations! Because when you learn the language, you're also learning the best practice by default! And it becomes very hard to do it the wrong way.
Let's pause for a quick example. I really enjoy the C# programming language for this reason. C#, like C++, is object-oriented. But unlike C++, everything in C# is an object. Is that a limitation? Yes: everything has to be an object. Does it help? Yes: you only need to know one way to treat everything (like an object).
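Here's a quick sketch of what that buys you in practice (nothing exotic, just standard C#): even a plain int behaves like any other object, so one mental model covers everything.

```csharp
using System;

public static class Program
{
    public static void Main()
    {
        int answer = 42;

        // A value type still behaves like an object: it has ToString(),
        // GetType(), and it can be assigned to an object reference (boxing).
        Console.WriteLine(answer.ToString());          // "42"
        Console.WriteLine(answer.GetType().FullName);  // "System.Int32"

        object boxed = answer;               // an int where an object is expected
        Console.WriteLine(boxed.Equals(42)); // True

        // The same single way of treating things applies to strings, dates,
        // your own classes... everything. One rule to learn, one rule to read.
        Console.WriteLine(DateTime.Now.ToString());
    }
}
```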
Critics of C# would say that this business of everything being an object makes the code more verbose. And I agree. But you know what? I like verbosity too!
Recently, there has been a push for more concise programming languages. Quite frankly, I think this is misguided. Remember, a human's foremost concern should be understanding the code. Code will be read (and hopefully, understood) many more times than it will be written. So emphasis should be placed on ease of understanding, not ease of writing.
Another example. C# has a method called "ToString". Ruby has a method called "to_s". If you have never programmed in C# or Ruby before, which of those two method names do you think would be clearer?
"But," some may say, "if you are calling the method many times, those extra characters add up to a lot of extra development time!" This is wrong for several reasons.
The first reason is IntelliSense or other code-completion tools. Rarely do you have to type every character in a language like C#. The IDE makes intelligent guesses as to what you are about to type and gives very accurate completion options.
The second reason is that the actual typing of your code should probably be a small portion of your development activity. Hopefully, you spend more time thinking and designing than you do typing.
The third reason is that, as mentioned above, code will be read more often than it is written. So speeding up the reading [i.e., understanding] of code should take precedence. Granted, anyone experienced in the language will understand the built-in abbreviated method names. But those shortcuts create a sort of 'culture of obfuscation.' Developers will follow that same standard in code they write and then the whole code base becomes difficult to understand. True, a language like C# has a 'culture of long method names,' which may look silly. But silly or not, I don't have to guess at what they do. And that is what matters to me.
To sum it up, limitations make it hard[er] to write bad code. And limitations + verbosity make your code easier to understand - both by yourself and others. So why the push for more free-form, concise languages? I really don't know. I will say that if a language can be more concise without sacrificing understandability, that's great! But I do know that I'd rather read Java than Perl any day of the week.
Labels: code, concise, limitations, programming, software engineering, verbosity
Tuesday, July 16, 2013
Recognition Rather Than Recall
Have you ever forgotten to take your keys when you left the house? Or your wallet? Or your phone?
Let me guess why. I bet it was because they weren't in the place where you normally leave them. Right?
Because you didn't see them, you didn't think to take them. "Out of sight, out of mind," as they say.
So some people get in the habit of always putting these things in a place where they will see them, preferably as they walk out the door. Then they are in your sight, and thus, in your mind.
Software user interface designers call this principle Recognition Rather Than Recall. The basic idea is: don't make users have to remember to do (or how to do) something. Rather, provide them some cue that they will recognize to help them along.
http://www.nngroup.com/articles/ten-usability-heuristics/
For example, today basically all computer programs have a menu bar or toolbar somewhere. They are a constant reminder to the user that, "hey, you can save this!" or, "hey, you can make this text bold!" or, "I hope you don't want to close me, but you can click this red X here if you really want to." The user can recognize the button and know both that the operation exists and how to perform it.
Compare that with, say, WordPerfect back in the old days. (And by 'old days', I mean the early 90s.) Way back then, WordPerfect was the premier word processing application. But it didn't have the fancy-shmancy toolbars of today. The user had to remember that the F5 key was save and F12 was quit. Actually, those are probably not the correct keys... but that just further illustrates how hard it is for the user to remember.
With the rise of touchscreen interfaces, I worry that we will backtrack here. A trendy feature of such interfaces is gesture-based commands. The problem is, if the user sees no visual cues that a command is available, he has to... ugh... remember the commands. How exhausting!
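If you'll forgive a toy example (a hypothetical C# console sketch, nothing to do with any real product): the whole difference is whether the available commands are shown to the user or must be remembered.

```csharp
using System;

public static class Program
{
    // Recognition: the prompt shows what is possible, so nothing has to be
    // remembered. Contrast a gesture-only UI, where the user must recall
    // that an invisible command exists at all.
    public static void Main()
    {
        Console.WriteLine("What would you like to do?");
        Console.WriteLine("  [S] Save    [B] Bold    [X] Close");
        Console.Write("> ");

        string choice = Console.ReadLine()?.Trim().ToUpperInvariant();

        switch (choice)
        {
            case "S": Console.WriteLine("Saved."); break;
            case "B": Console.WriteLine("Bold toggled."); break;
            case "X": Console.WriteLine("Closing."); break;
            default:  Console.WriteLine("The options above are all there is."); break;
        }
    }
}
```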
I faced a really glaring example of this just this week. I logged into a server running Windows Server 2012. I needed to find a particular program. But the normal Windows entry point to find programs, the Start button, was nowhere to be found. There were a few icons for some applications, all of which were useless to me at the time. I just needed to open this specific app.
Now, I'm a software engineer. I have been using Windows for almost 20 years. And here I was, unable to figure out how to open the application I wanted. The visual cues I am accustomed to had vanished. I had to do a Google search to find it. A Google search! It turns out, you have to move the mouse to the lower right of the screen and that opens the "Charms Bar." Then you can access the Start screen from there. But how is anyone supposed to know that?
But even when I got to the Start screen, I couldn't see all my programs. Again, there was no button or visual cue about how to access them. Again I had to consult the wisdom of the Internet. It turns out that on the Start screen, you have to right click and then you can access your apps. Again, how is the user supposed to figure that out?
My point is, there is now nothing to recognize. No button. No visual cue of any kind. It all relies on recall. It's even worse when you've never done it before... as in my case with Windows Server 2012. How can you recall what you never knew in the first place?
Labels: psychology, recall, recognition, software engineering, usability
Saturday, February 5, 2011
Broken Windows
Broken Window Theory is an extremely powerful concept. Here's a short article about it:
http://en.wikipedia.org/wiki/Broken_windows_theory
And here is the original article that started it all:
http://www.theatlantic.com/magazine/archive/1982/03/broken-windows/4465/
It was first developed with respect to criminology, but it has very far-reaching implications. Malcolm Gladwell expounds on the sociology of it in The Tipping Point. And Steve McConnell applies it to Software Engineering in Code Complete, mentioned last time.
Why is it so powerful? If you read the above Wikipedia article I'm sure you can see why: small changes to an environment invoke large changes in the minds of the people therein.
I don't want to repeat what the article says (please do read it) but in short, here's the point:
An investigation of urban areas was performed to determine what transforms a good neighborhood into a bad neighborhood. The answer: broken windows.
When people see broken windows in buildings, it sends them a signal that nobody cares about that area. So vandals come and break more windows, in addition to other types of destruction. This is now a stronger signal that nobody cares and that disorder rules. So crime escalates. It is a self-perpetuating downward spiral. And it all starts from broken windows.
The solution is clear: fix broken windows. New York City tried exactly that starting in 1985. It first applied the principle to the subway system and then the city in general. The results were striking. Changing this small signal drastically improved crime rates.
Again, you can see the import of this. The broken window is simply a metaphor. What matters is the signal. If we want people to follow some norm, we need to ensure signals are in place to encourage such behavior. Perhaps even more importantly, we need to remove signals that encourage the opposite behavior.
Here's a really simple example. Let's say you have some roommates and you want them to keep the place clean. If they see "broken windows" (again, a metaphor here: perhaps a few unwashed dishes, or a dirty floor) what signal are you sending? That you don't care about cleanliness! Which is the opposite signal that you want to send.
So anytime we want to encourage a certain social norm, we need to fix broken windows. They may seem like little things, but that is exactly the point! The little things are always noticed, at least on some level.
After learning about this, I started to see broken windows everywhere (especially in code we were writing). It is worth the time investment to fix them.
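To make the metaphor concrete in code, here's the kind of "broken window" I mean (a contrived C# example, not from any real project): a swallowed exception with a stale TODO. Leave it in, and the signal to the next reader is that nobody cares; fix it, and the signal flips.

```csharp
using System;
using System.IO;

public static class ConfigLoader
{
    // A "broken window": the exception is silently swallowed and the TODO has
    // gone stale. Every reader learns that sloppiness is tolerated here.
    public static string LoadBroken(string path)
    {
        try
        {
            return File.ReadAllText(path);
        }
        catch (Exception)
        {
            // TODO: handle this someday
            return "";
        }
    }

    // The window, fixed: the failure is explicit and the intent is clear.
    public static string LoadFixed(string path)
    {
        if (!File.Exists(path))
        {
            throw new FileNotFoundException(
                $"Configuration file not found: {path}", path);
        }
        return File.ReadAllText(path);
    }
}
```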
Thursday, January 20, 2011
The Cost of Fixing Defects
Steve McConnell wrote a great book about Software Engineering called Code Complete. If you're not a Software Engineer, I wouldn't recommend it. But if you are, read it now!
But there are 2 really excellent points in the book, even if you are not into Computer Science (really!). The first is the subject of this post: the exponentially increasing cost of fixing defects as you progress into the development cycle. It looks something like this:
Along the bottom are the various phases of developing a piece of software. The far left signifies the early stages and the far right, the stages after deployment.
The chart above actually leaves out a preliminary stage that I always start with: the problem statement. You can't solve a problem unless you know what it is. This fundamental truth is all too often forgotten as people rush into a "solution."
The idea is that if you catch a problem in the first stage, which is simply the conceptual defining of the software, it costs 150 times less to fix than if you wait until the final stages. 150 times!
The idea makes intuitive sense. What is surprising is the scale of the difference.
Now, I promised this would be interesting outside the scope of Software Engineering. I think you can see how:
This same principle applies all around us. Often we rush forward into something, only to realize a flaw in the early stages of the venture that is now costly (or difficult, or painful, or annoying... or impossible) to fix. Or if the flaw was in the very definition of the problem, we have now wasted a tremendous amount of energy pursuing a "solution" to the wrong problem.
I have most often observed this in discussions that turn into arguments. A simple misunderstanding of a term or idea initiates a useless argument. One party says something and means one thing. The other party thinks he meant something else, and off we go! Hopefully someone with a cool head eventually realizes they are talking about different things, and the whole exchange gets written off as an exercise in futility that got nowhere.
So let's now apply our SE principle. If, at the outset of the discussion, the parties could take a step back and define the nature of the disagreement, they would likely realize it was a simple misunderstanding and no argument would take place.
How do we do this? The Bible writer James tells us to be "swift about hearing, slow about speaking." Oh, if we could only apply that! Instead of being so quick to support our case, why not ask a question to clarify the other side? And then listen to the answer!
I can't tell you how many times this has aided me at my former job and even now in casual discussions. So often we find that there is really no disagreement at all. Or, equally importantly, that the disagreement is completely different than we initially understood it to be. In either case, we save ourselves a huge amount of effort and time.
Again, we cannot solve a problem if we do not know what it is.
Now, I mentioned there were 2 great points from this book. We just talked about the first. The second is Broken Windows. I will save that for another time.