Thursday, December 12, 2013

You Can't Win an Argument

I hate arguing.  Actually, I feel that if a discussion gets to the point of becoming an argument, I've already lost.  Or, more accurately, both sides have already lost.

I could never quite articulate why I felt that way, but now I think I can.  And I have Dale Carnegie to thank for that:

Nine times out of ten, an argument ends with each of the contestants more firmly convinced than ever that he is absolutely right.  

You can't win an argument.  You can't because if you lose it, you lose it; and if you win it, you lose it.  Why?  Well, suppose you triumph over the other man and shoot his argument full of holes... Then what?  You will feel fine.  But what about him?  You have made him feel inferior.  You have hurt his pride.  He will resent your triumph.  And -

A man convinced against his will 
Is of the same opinion still.

Ben Franklin illustrates it with this trade-off:

If you argue and rankle and contradict, you may achieve a victory sometimes; but it will be an empty victory because you will never get your opponent's good will.

So figure it out for yourself.  Which would you rather have: an academic, theatrical victory or a person's good will?  You can seldom have both.

Lincoln chimes in with what to do instead of arguing:

No man who is resolved to make the most of himself can spare time for personal contention, still less can he afford to take the consequences, including the vitiation of his temper and the loss of self-control.  Yield larger things to which you can show no more than equal right; and yield lesser ones, though clearly your own.  Better give your path to a dog than be bitten by him in contesting for the right.  Even killing the dog would not cure the bite.

As Dale says, will proving someone wrong make him want to agree with you?  Of course not!  You just made him hurt and angry.  In other words, you've evoked emotions.  Negative ones.  And once those are in play, all the reason and logic in the world won't do any good.

Monday, November 18, 2013

Don't Roll Your Eyes

"[John] Gottman has proven something remarkable.  If he analyzes an hour of a husband and wife talking, he can predict with 95 percent accuracy whether that couple will still be married fifteen years later."  - Malcom Gladwell, Blink.

How does he do it?  By looking at the microexpressions on their faces while they hold a meaningful conversation.  Each expression conveys an emotion or attitude.  The "Four Horsemen" of negative emotion are: defensiveness, stonewalling, criticism, and contempt.  But within those four, the king is contempt.  Contempt is the single strongest indication that a marriage is in trouble.

A disagreement, even criticism, can be discussed rationally and worked through.  But contempt is putting the other person on a lower plane than you.  It is immediately discounting what they are saying for no other reason than because you feel superior to them.  That is detrimental.

So what is the facial expression that indicates contempt?  You guessed it: eye-rolling.


Sunday, November 10, 2013

How to Ask for Feedback... or Any Favor

I don't remember much about 7th grade. But I do remember one time when I asked some of my classmates to help me fold sheets of paper to be passed to the rest of the class. I offered [what I considered] a tip as to how to do it more efficiently.  But the response I got was, "When you ask someone to do you a favor, don't then ask them to do it faster."

Now, when we ask someone to give us feedback, we're really asking them to do us a favor. We want to improve whatever it is we're asking for feedback on, and by providing that feedback, they are helping us. So if they are already going to this effort to help us, we really shouldn't ask more from them than necessary. In fact, we should make it as easy as possible for them to help us.

Probably the most common mechanism for providing feedback today is the survey. Someone buys a product, uses a service, attends a class, etc., and then fills out a feedback survey about it.

We can see the same thing with software and websites. There is often a form or something that vendors use to gauge the user experience of their digital product. I posit that such a mechanism tries the patience and goodwill of users. It makes them have to go out of their way - do extra work - to do a favor for the vendor. So some vendors offer small rewards for filling out these surveys, in recognition of this fact.

But I think we can do better. Consider this juvenile, but brilliant, example:

[Image: a feedback mechanism built into a urinal]

Why do I say this is brilliant? It takes something the user has to do anyway, and turns it into a feedback mechanism. The user doesn't have to do any extra work or go out of his way at all. He just does his normal business (heh) and has no choice but to provide feedback in the process. 

I just checked and apparently it's no longer there, but Skype used to be a good example of this. You would make a call (over the internet) and when the call was over, you would have to close the 'call' window. But the only way to close it was to click a button that gave feedback on the quality of the call. Now, you have to close that window anyway. So it's no extra work for you to provide feedback while you do it. It's not intrusive. It's convenient.

This is the kind of mechanism we should be using more with software. Users are using it anyway; why not build in ways to gather feedback that don't disrupt their workflow?
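Skype's actual implementation is long gone, but here's a minimal sketch of the pattern in C#/WinForms (all the names, like CallWindow and RecordRating, are hypothetical): the only way to dismiss the window is to rate the call, so giving feedback costs the user zero extra clicks.

    using System;
    using System.Windows.Forms;

    // Sketch: the window's only close buttons double as the feedback
    // mechanism, so rating the call is the same click as closing it.
    class CallWindow : Form
    {
        public CallWindow()
        {
            Text = "Call ended";
            foreach (var rating in new[] { "Great", "Okay", "Poor" })
            {
                var button = new Button { Text = rating, Dock = DockStyle.Top };
                button.Click += (sender, e) =>
                {
                    RecordRating(rating); // send the rating home...
                    Close();              // ...and close the window, all in one click
                };
                Controls.Add(button);
            }
            Controls.Add(new Label { Text = "How was the call quality?", Dock = DockStyle.Top });
        }

        // Placeholder: a real app would post this to the vendor's server.
        static void RecordRating(string rating) =>
            Console.WriteLine("User rated the call: " + rating);

        [STAThread]
        static void Main() => Application.Run(new CallWindow());
    }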

I, personally, am more than happy to provide feedback to software companies if that process doesn't get in my way. For example, I always check the 'send anonymous usage statistics to the vendor' box when I install a program. I'm happy to help in making the software better, as long as it doesn't inconvenience me.

Granted, we may not be able to do this for every kind of feedback. But I feel there is a huge amount of data that we are missing out on because we just make it too hard for people to give it to us. We could be making our products and services a whole lot better... we just have to be a little creative in how we ask for feedback.

Friday, October 18, 2013

Goals Are for Losers

Scott Adams, creator of Dilbert, discusses why systems are better than goals.

Hint: it's because systems allow for - and in fact, are improved by - failure.

http://online.wsj.com/news/articles/SB10001424052702304626104579121813075903866

Monday, September 30, 2013

I Love Limitations in Programming Languages

I'm not being sarcastic! I really do love them. Here's the thing:

Writing code is a very free-form exercise. Each developer has their own style, there are usually many ways of accomplishing the same task, and everyone has their own idea of what 'pretty' or 'clean' code looks like. The computer, of course, doesn't care. It just cares that, basically, there is some code. It finds this 'some code' and it happily compiles or interprets it.

Humans, however, care about more than that. At the most fundamental level, humans care about understanding what the code is doing. (At least, they should!) Now, what is easier to understand: one thing or 100 things? I hope you said one thing.

And this is exactly my point: If the language only provides one way of doing something, it makes it easier for every other developer to understand what the code is doing. If the language provides 100 ways of doing the same thing, well, then every developer has to know all 100 ways.

So some might look at that language that only provides one way as very limiting. "There's only one way! Psshaww!" But I look at that language and say "That's so easy to understand!" I love those kinds of limitations because they make things easier. And I love easy.

Now, if the designers of the language are clever, they will make it such that the one way happens to be the best way.  In other words, they build the best practice into the language and don't allow you to (or at least, make it difficult for you to) deviate from it.  In this case, I really love the limitations!  Because when you learn the language, you're also learning the best practice by default!  And it becomes very hard to do it the wrong way.

Let's pause for a quick example. I really enjoy the C# programming language for this reason.  C#, like C++, is object-oriented. But unlike C++, everything in C# is an object. Is that a limitation? Yes: everything has to be an object. Does it help? Yes: you only need to know one way to treat everything (like an object).
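A quick illustration (just a sketch, but it compiles):

    using System;

    class EverythingIsAnObject
    {
        static void Main()
        {
            // Even primitives derive from System.Object in C#, so the
            // same members are available on everything.
            Console.WriteLine(42.ToString());   // "42"
            Console.WriteLine(true.GetType());  // System.Boolean

            // One way to treat everything: like an object.
            object[] things = { 42, true, "text", 3.14 };
            foreach (object thing in things)
                Console.WriteLine(thing.ToString());
        }
    }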

Critics of C# would say that this business of everything being an object makes the code more verbose. And I agree. But you know what? I like verbosity too!

Recently, there has been a push for more concise programming languages. Quite frankly, I think this is misguided. Remember, a human's foremost concern should be understanding the code. Code will be read (and hopefully, understood) many more times than it will be written. So emphasis should be placed on ease of understanding, not ease of writing.

Another example. C# has a method called "ToString". Ruby has a method called "to_s". If you have never programmed in C# or Ruby before, which of those two method names do you think would be clearer?

"But," some may say, "if you are calling the method many times, those extra characters add up to a lot of extra development time!" This is wrong for several reasons.

The first reason is IntelliSense or other code-completion tools. Rarely do you have to type every character in a language like C#. The IDE makes intelligent guesses as to what you are about to type and gives very accurate completion options.

The second reason is that the actual typing of your code should probably be a small portion of your development activity. Hopefully, you spend more time thinking and designing than you do typing.

The third reason is that, as mentioned above, code will be read more often than it is written. So speeding up the reading [i.e., understanding] of code should take precedence. Granted, anyone experienced in the language will understand the built-in abbreviated method names. But those shortcuts create a sort of 'culture of obfuscation.' Developers will follow that same standard in code they write and then the whole code base becomes difficult to understand. True, a language like C# has a 'culture of long method names,' which may look silly. But silly or not, I don't have to guess at what they do. And that is what matters to me.

To sum it up, limitations make it hard[er] to write bad code. And limitations + verbosity make your code easier to understand - both by yourself and others. So why the push for more free-form, concise languages? I really don't know. I will say that if a language can be more concise, without sacrificing understandability, that's great! But I do know that I'd rather read Java over Perl any day of the week.


Tuesday, July 16, 2013

Recognition Rather Than Recall

Have you ever forgotten to take your keys when you left the house?  Or your wallet?  Or your phone?

Let me guess why.  I bet it was because they weren't in the place where you normally leave them.  Right?

Because you didn't see them, you didn't think to take them.  "Out of sight, out of mind," as they say.

So some people get in the habit of always putting these things in a place where they will see them, preferably as they walk out the door.  Then those things are in their sight, and thus, in their mind.

Software user interface designers call this principle Recognition Rather Than Recall.  The basic idea is: don't make users have to remember to do (or how to do) something.  Rather, provide them some cue that they will recognize to help them along.  

http://www.nngroup.com/articles/ten-usability-heuristics/

For example, today basically all computer programs have a menu bar or toolbar somewhere.  They are a constant reminder to the user that, "hey, you can save this!" or, "hey, you can make this text bold!" or, "I hope you don't want to close me, but you can click this red X here if you really want to."  Yes, the user can recognize the button and know both that the operation exists and how to do it.
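In code, the difference between the two approaches is tiny, which makes the payoff all the more striking. Here's a minimal sketch (C#/WinForms, hypothetical names): both paths run the same command, but only the button gives the user something to recognize.

    using System;
    using System.Windows.Forms;

    class Editor : Form
    {
        Editor()
        {
            Text = "Recognition rather than recall";

            // Recall only: Ctrl+S works, but nothing on screen hints that it exists.
            KeyPreview = true;
            KeyDown += (sender, e) => { if (e.Control && e.KeyCode == Keys.S) Save(); };

            // Recognition: a visible button is a standing reminder that the
            // command exists and of how to invoke it.
            var toolbar = new ToolStrip();
            toolbar.Items.Add("Save", null, (sender, e) => Save());
            Controls.Add(toolbar);
        }

        static void Save() => MessageBox.Show("Saved!"); // placeholder action

        [STAThread]
        static void Main() => Application.Run(new Editor());
    }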

Compare that with, say, WordPerfect back in the old days.  (And by 'old days', I mean the early 90s.)  Way back then, WordPerfect was the premier word processing application.  But it didn't have the fancy-shmancy toolbars of today.  The user had to remember that the F5 key was save and F12 was quit.  Actually, those are probably not the correct keys... but that just further illustrates how hard it is for the user to remember.  

With the rise of touchscreen interfaces, I worry that we will backtrack here.  A trendy feature of such interfaces is gesture-based commands.  The problem is, if the user sees no visual cues that a command is available, he has to... ugh... remember the commands.  How exhausting!

I faced a really glaring example of this just this week.  I logged into a server running Windows Server 2012.  I needed to find a particular program.  But the normal Windows entry point to find programs, the Start button, was nowhere to be found.  There were a few icons for some applications, all of which were useless to me at the time.  I just needed to open this specific app.

Now, I'm a software engineer.  I have been using Windows for almost 20 years.  And here I was, unable to figure out how to open the application I wanted.  The visual cues I am accustomed to had vanished.  I had to do a Google search to find it.  A Google search!  It turns out, you have to move the mouse to the lower right of the screen and that opens the "Charms Bar."  Then you can access the Start screen from there.  But how is anyone supposed to know that?

But even when I got to the Start screen, I couldn't see all my programs.  Again, there was no button or visual cue about how to access them.  Again I had to consult the wisdom of the Internet.  It turns out that on the Start screen, you have to right click and then you can access your apps.  Again, how is the user supposed to figure that out?

My point is, there is now nothing to recognize.  No button.  No visual cue of any kind.  It all relies on recall.  It's even worse when you've never done it before... as in my case with Windows Server 2012.  How can you recall what you never knew in the first place?  

Monday, July 8, 2013

Excellence Answers to No One

I admit: I am mediocre.  But I like to think about excellence and individuals who have reached some level of excellence.  There are, of course, many ways to approach this, but one thing that keeps running through my head is: excellence cannot exist under the direction of mediocrity.

This may seem obvious.  Maybe it is.  In any case, here's why I'm thinking it:

Let's say you own a company and you want to build a new headquarters.  Now, presumably, you have some idea how you would like the building to look, but you're no architect.  So you hire an architect.  He, presumably, has some experience designing buildings.  The question is: who designs the building?  You or the architect?

Really, that depends on how you perceive your abilities vs his.  If you perceive yourself as having an excellent eye for building design, you will give instructions to the architect and he will basically follow them.  After all, you're paying him.  However, if you perceive yourself as mediocre (or worse) in this regard, you will defer to his experience because you perceive him as excellent, at least in comparison to yourself.

Now, which is the better course?  I think it's safe to say the latter.  After all, he is the architect.  Sure, you give him some themes you'd like to see.  But then you let him work his excellence.  The building gets built, and it is admired by all passers-by.

Of course, this latter approach depends on two very critical things: the architect should actually be excellent, and you should recognize your mediocrity in this respect.

Granted, this is a contrived example.  But, I daresay, this principle affects the rise of companies today.  There are at least two models of product design that I can think of: designing products based on input from users (or stakeholders, financiers, etc.) and designing products based on what you think is cool.

If you have mediocre product designers, definitely go with the former.  But if you have excellent product designers, you better go with the latter.

Most companies and/or products fall into the first bucket.  They need to build their product to satisfy their customers, so they do whatever they can to meet the customers' needs.  By definition, most of anything is mediocre.  So for the general case, this is a good model to follow.

But the companies that fall into the second bucket are the game-changing companies.  Google, Apple, Tesla.  Did Google do market research when they decided to build Loon?  Self-driving cars?  Gmail?  No!  They made those products because they thought they were cool.  Likewise with the iPod, iPhone, and the Model S.  People who are excellent in their field built something they thought was cool.  More often than not, other people then also thought those things were cool.

If Apple had built the iPod based on feedback about music players of the day, it would simply have been a CD-player with some more bells and whistles.  People mediocre in the field wouldn't have had the creativity or technical background to envision something game-changing.  But the excellent people did!

Now, what would have happened if their boss (or stakeholder, or financier) didn't recognize the genius of their vision - if they were mediocre?  They may have squashed the project.  The best case scenario would have been the forgoing of huge profits.  The worst case would have been those excellent people quitting, starting their own competing company, and putting Apple out of business.

So I guess what I'm saying is that if you want your company to do something ground-breaking, hire excellent people and then get out of their way.

And now, the giant caveat.  What happens when mediocre people think that they are excellent?  That's clearly the worst of both worlds.  They would refuse to listen to input from those who know more even though they don't have the skills to justify such hubris.  Those are the people you fire.

The last question, then, is: how does one recognize excellence, especially if he is mediocre himself?  That's a tough one.  I may not be able to recognize excellence, but I can recognize if someone is better than me.  Those are the people who I hire.  And I tell them to hire people better than them.  After enough such iterations, hopefully we can hit that bar.

Friday, June 7, 2013

Your Mind Makes It Real

The Placebo Effect.  The Nocebo Effect.  Priming.  Um...  The Matrix.

Your mind has an uncanny ability to make perceived things real.  I used to think this was incredibly impressive.  But I didn't know the half of it.

Check out this experiment:

http://pds15.egloos.com/pds/200910/18/78/Gaining_strength.pdf

If you're like me and are too lazy to read it, here's the gist:

Some researchers in Cleveland wanted to determine if mental practice of an exercise could actually result in physical changes to the targeted areas of the body.  One group of subjects did a regular exercise involving moving their finger sideways.  A second group regularly imagined doing the same exercise but did not actually go through the physical motions.  The control group did nothing.

After 12 weeks, the 'actual physical exercise' group showed a finger strength increase of 53%.  The control group did not show any strength increase.  Now, the fun part: the 'mental exercise' group showed a strength increase of 35%!

That's right: the group that didn't perform any physical exercise increased their physical strength by imagining they were exercising.  "They didn't have to lift a finger in order to convince their brains that they were, in fact, lifting a finger."

When I first read this, I thought it must have been some mistake.  But the same results were obtained by a separate experiment done in Canada in 2007:

http://westallen.typepad.com/brains_on_purpose/files/mind_over_matter_shackell_07.pdf

In this case, the experiment involved hip muscles.  They used the same three groups.  The 'actual physical exercise' group increased their strength by 28%.  The control group, none.  The 'mental exercise' group... wait for it... 24%!

I don't know about you, but I am blown away by this.  And I'm sure I will continue to be blown away as we uncover more amazing things our minds can do.

Friday, April 5, 2013

Prototypical Remembrance

Your life is not as you remember it.  Neither is mine.

It's a bold claim to make, I agree.  So let me elaborate:

Think about your next vacation.  How much money are you willing to spend on it?  Now, how much would you be willing to spend if, at the end of the vacation, all pictures were deleted, all souvenirs taken, and even all memories erased?  In other words, how much would you spend if you knew that you would not remember anything about the vacation?

Think about it.

I'd wager you would spend a lot less money in the no-memory case... maybe nothing at all!

But why?  If we still get to enjoy the vacation, why should it make a difference if we remember it?  Therein lies the rub!

Experiments have shown that we essentially have two selves: the experiencing self and the remembering self.  The former lives in the moment and the latter looks back and evaluates.  But the most interesting part is that the two usually disagree!   

But how could they disagree?  I mean, we are the same person in both instances.  The trick lies in how we remember.

It turns out we don't store every moment in our memory.  Instead, we store prototypes of events.  For example, think about your commute to work.  You likely don't remember every moment of every day's commute.  What you do remember is a prototypical, average commute.  If something was different today ("oh look, a Starbucks is opening") it stands out because it is different from the prototype.  Your mind then stores the prototype plus the new exceptional case.  So the memory of all of our commutes is essentially a file of a prototype plus exceptional cases.

Let's take it a step further.  There are many moments in a commute.  How does our mind determine what should make up the prototype?  If it picks the wrong moments to represent the entire event, we will have a discrepancy between the experiencing self and the remembering self.  As it turns out, this is exactly what happens.

One great experiment involved asking subjects to put their hands in bowls of cold water.  At frequent intervals, the conductor of the experiment would ask them to rate the discomfort they felt in their hands (the 'experienced' rating).  At the end, after they removed their hands, they were asked to rate how negative the overall experience was (the 'remembered' rating).  I ruined the surprise already, but: the two types of ratings almost always disagreed!

If you look at the results of this experiment, you will find that the 'remembered' rating is about equal to the average of the peak 'experienced' rating and the last 'experienced' rating.  All other 'experienced' ratings are ignored.  Interestingly, the total time of the negative event is also ignored.

From a purely mathematical standpoint, one would expect that the overall 'remembered' rating should be the integral (the area under the curve) of all of the 'experienced' ratings.  But our remembering self uses the above peak-end calculation instead.
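To make the two calculations concrete, here's a tiny sketch (C#, with made-up sample ratings) of the integral a rational agent "should" compute versus the peak-end shortcut our memory actually uses:

    using System;
    using System.Linq;

    class PeakEnd
    {
        static void Main()
        {
            // Made-up discomfort ratings sampled during a cold-water trial
            // (0 = no discomfort, 10 = maximum discomfort).
            double[] experienced = { 2, 4, 7, 9, 5, 3 };

            // What a rational agent "should" remember: the area under the curve.
            double integral = experienced.Sum();

            // What the remembering self actually computes: the average of the
            // worst moment and the last moment. Duration is ignored entirely.
            double peakEnd = (experienced.Max() + experienced.Last()) / 2;

            Console.WriteLine("Integral of experience: " + integral); // 30
            Console.WriteLine("Peak-end memory:        " + peakEnd);  // 6
        }
    }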

So, the way we remember our lives is not the way we have actually experienced them.

For a much more detailed explanation, please do read Thinking, Fast and Slow by Daniel Kahneman.

http://www.amazon.com/Thinking-Fast-Slow-Daniel-Kahneman/dp/0374275637

Monday, March 4, 2013

Another Step Forward for the Knowledge Stream

Several months ago I posted about what I call the Knowledge Stream: essentially, a network of neural implants that we could use to share the world's knowledge in real-time just by thinking about it.

Well, Brown University has made some great progress with devices implanted into the motor cortex:

http://www.extremetech.com/extreme/149879-brown-university-creates-first-wireless-implanted-brain-computer-interface

Granted, it's still a far cry from sharing information, but the pieces are coming together...


Friday, January 11, 2013

Why Negotiations Are So Hard

Say I offer you a 50/50 chance to either win $200 or lose $150.  Would you take that chance?

Most people wouldn't.

But, from a purely mathematical standpoint, it is a good bet.  That chance has an expected value of $25 (0.5*200 - 0.5*150).  Many experiments have been done on such gambles and the result is that people only start to take the chance when the gain is about double the loss (win $300 or lose $150).

Now, a purely rational agent would take any chance that has an expected value greater than 0.  So these experiments have shown that in the case of these gambles, people are not acting as rational agents.  Indeed, the conclusion is that the emotional impact of a loss is about twice that of a gain of the same amount.
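The asymmetry is easy to play with in code.  A toy sketch (C#; the clean factor of 2 is my simplification of the experimental findings):

    using System;

    class LossAversion
    {
        // Toy "felt value" function: losses weigh twice as much as gains.
        static double FeltValue(double amount) => amount >= 0 ? amount : 2 * amount;

        static void Main()
        {
            // The gamble: 50/50 chance to win $200 or lose $150.
            double mathematical = 0.5 * 200 + 0.5 * -150;               // +25: take it!
            double felt = 0.5 * FeltValue(200) + 0.5 * FeltValue(-150); // -50: no way!
            Console.WriteLine(mathematical);
            Console.WriteLine(felt);

            // Bump the win to $300 and the felt value reaches break-even,
            // which is right where the experiments say people start saying yes.
            Console.WriteLine(0.5 * FeltValue(300) + 0.5 * FeltValue(-150)); // 0
        }
    }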

If you think back to your own experiences with gains and losses, you may recall feeling the same way.  But the implications are very interesting.

This is what makes negotiations hard.  A negotiation is, basically, one side giving up something in exchange for the other side giving up something.  But if we value losses twice as much as gains (and the other side is doing the same thing), we can understand why it's so hard to come to an agreement:

Say we give up X, which is objectively worth $500.  Because it's a loss to us, it feels like giving up $1000; to the other side it's merely a gain, so to them it feels like only $500.  To feel whole, we then demand something worth $1000, since that's what our loss feels like.  But giving that up is a loss on their side, so to them it feels like $2000.  No wonder each side feels short-changed.

The psychology of this can be seen everywhere.  In battles, the defending army fights harder than the invading army.  You can even see this in gas prices.  Back when there were discussions as to whether to allow cash and credit gas prices to be different, the credit card companies said that if there was a difference, it should be called a cash discount rather than a credit surcharge.  The reason is that people would rather forego a discount (gain) than pay a surcharge (loss).  The numbers are the same, and to a rational agent, it would be equivalent.  But to people, it makes a difference.

This is a clear case in which our emotions cloud our better judgement... as usual.  So what can we do about it?

Actually, it's not so hard to fix our thinking.  Let's go back to the gamble above.  What if, rather than giving you one chance at winning $200 or losing $150, I gave you a hundred chances?  You would likely quickly reason that the overall odds would be in your favor and it's very likely you would end up with a winning (greater than 0) amount.  But, really, it's the same gamble - just repeated a hundred times in the latter case.
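Don't trust the intuition?  Simulate it.  A quick sketch in C#:

    using System;

    class RepeatedGamble
    {
        static void Main()
        {
            // Each play: 50/50 chance to win $200 or lose $150.
            // Expected value per play: 0.5 * 200 - 0.5 * 150 = $25.
            var random = new Random();
            int total = 0;
            for (int play = 0; play < 100; play++)
                total += random.Next(2) == 0 ? 200 : -150;

            // Over 100 plays the expected total is $2,500; ending up in
            // the red is very unlikely, even though any single play feels risky.
            Console.WriteLine("Total after 100 plays: $" + total);
        }
    }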

In other words, taking a broader view helped you realize that it's a good bet.  In the grand scheme of things, the few losses here and there would be outweighed by the wins.  When faced with any decision like this, we can apply the same logic.

The negotiation situation is not terribly different.  If we can take a broad view and look holistically at everything we will be gaining and losing during the negotiation, we realize that a single loss doesn't hurt so bad.    

This is just one of the themes discussed by Daniel Kahneman in his excellent, if dry, tome: Thinking, Fast and Slow:

http://www.amazon.com/Thinking-Fast-Slow-Daniel-Kahneman/dp/0374275637