How Mobile has Hijacked Human Nature

We live in abundance; why does our attention feel so scarce?

Our biology hasn’t caught up to our technology. Today, we live in a time of abundance — abundance of information, content, and connectivity. Yet our time and attention have never felt more scarce — or scattered. How we manage the interplay between these dynamics is critical to our future yet completely unresolved. We are in uncharted territory.

Falling down.

The age-old expression “There’s never a dull moment” has never been more true than it is today.

Just look around at the shoppers in line at the store, the drivers at the traffic light, your co-workers at the happy hour, or even — increasingly and frustratingly — your family at the dinner table. What do you see? Downward gazes toward mobile devices. Fingers on glass clearing notifications, swiping feeds, and dealing with something (ostensibly) important.

The smartphone is the escape portal in our pocket that promises transportation to another place. Distraction. Through our phone’s vibrations come whispers of entertainment, work, messages, whatever. All impatiently wait for our attention.

So long as you have a signal and a decent charge, no amount of time is too small. In the minutes spent waiting, stuck in traffic jams, or walking to meetings; in the seconds between bites of food, at periodic pauses in conversation, or during lulls in the Netflix shows we stream — we get little chances to escape.

What once was merely marginal time — downtime between more important activities — has become significant, unlocked by our smartphones.

Where does the time go? Spinning my wheels.

Estimates suggest we are checking our Mobile devices some 110–150 times a day.¹ All those moments add up. Mary Meeker’s 2017 Internet Trends presentation shared that time spent with Mobile has increased from about an hour a day in 2011 to going on 6 hours a day in 2016:

Whether we’ve hit the upper limit in time spent with Mobile or not is yet to be seen (though there is only so much time in a day). To that end, a recent TED talk given by NYU professor of psychology Adam Alter shared research on how the average 24-hour day is spent across four categories: Sleep, Work+Commute, Survival, and Personal. You can see the findings below. Note that within “Personal” is a red bar that represents screentime.

Alter’s findings show that from 2007 to 2017 Personal time has been consumed by screentime. It’s hard to grasp the gravity of such a colossal shift in human behavior — one that occurred in a mere ten years — yet self-reflection bears it out. Personal time — our leisure time — feels like it has evaporated. Why?

Standing still — it’s like running on ice.

The value of time is highly subjective to the person experiencing its passing. Consider how someone with nowhere to be might not mind a minor delay at a red light. Contrast that same person with someone trying to make it to the airport in time to catch a flight — this person will find waiting at that red light much more painful.

Before the Internet — and especially before smartphones — having meaningful downtime was normal. In your downtime you could turn on the television, read a book, spend time with a friend (or family), or just sit there and do nothing. You could very easily get bored.

But the Internet, as made ubiquitous through our Mobile devices, has connected us to endless opportunities to distract ourselves, do work, do something. In our connected era you can still do all of the same things as before, but something as simple as doing nothing has become downright hard.

Louis C.K. nailed this idea on Conan almost four years ago (0:55s).

You need to build an ability to just be yourself and not be doing something. That’s what the phones are taking away. Is the ability to just sit there like this. That’s being a person, right? — Louis C.K. on Conan

Our constant connectivity made possible by Mobile has made it cheap to always be doing something. By extension, it has raised the price of doing nothing. Doing something on Mobile now costs us only our time and attention. Meanwhile, the price of doing nothing is willfully ignoring all the possible stuff we could be doing on our Mobile devices.

I only get a little attention when I fall.

All of the stuff we can do on Mobile comes down to content creation and consumption. The device in our pocket gives us the opportunity to broadcast or to hunt and devour.

The game is plentiful. There has never been more content to consume than there is today. Whether it’s professional content — the kind of stuff you might pay a premium for like a movie, book, streaming video service, music — or amateur content, which is everything that is shared on social media, our lives are inundated with content.

The abundant sources of streaming music.

Mobile devices are the ultimate content creation devices — sharing is creation. A short three years ago Mary Meeker estimated that 1.8 billion photos were being uploaded every day. YouTube has over one billion hours of video watched daily. Everyone you know on Facebook, Instagram, and Snapchat is their own little micro-celebrity (Twitter, too).

Creation and consumption of amateur content over social media.

Much of this content amounts to ephemeral noise, but there are enough gems to keep us swiping and searching.

Benedict Evans has written about how our phones are the only device that matters today. The phone is the remote control for the world and the Internet is the network. Through Mobile we have access to all the content we could want. Whereas in times past distribution channels mattered a great deal (e.g. network TV, cable services, book publishers, music labels, etc.), these proprietary channels matter very little in the age of the Internet.

All of this content clamors for our time and attention. Shouldn’t we be happy to live in a time of such abundance?

Why is it that, when we look around at our friends and family, people seem so distracted, so anxious?

What’s wrong with us?


We aren’t wired for this.

If all of our time can be spent doing something over the Internet and there is a never-ending, growing list of things to do — be it content to create or consume — how might we expect humans to behave?

Mobile connectivity has put a price on our marginal time and that price can be understood by turning to the economic concept of “opportunity cost.”

Opportunity cost is the hypothetical future you give up when you make a choice. It’s the imagined missed opportunities. When we think about what we gave up to do something we are filled with discomfort and anxiety — a sense of loss for some potential future that is now gone forever.

Opportunity cost is the fear of missing out.

FOMO.

Consider this situation. Imagine you’re missing work because of a dentist appointment. As you wait to be called, you see a magazine you could pick up and read. Or you could open your Mobile phone, check your work email, and knock off a couple to-dos. If you choose the magazine (or Facebook, Instagram, etc.), the opportunity cost is the foregone stuff you could have done on your Mobile device.

Here’s another one. You spend 20 minutes scanning Netflix for something to watch. You want to pick something you’ll enjoy, but struggle to sift through the hundreds of options available. You know there has to be something worth watching and that knowledge creates anxiety because, well, you don’t have to watch anything, at all. You better make a good choice — the best choice. Otherwise you’re giving up some other use of your time.

You finally settle on a movie that has decent reviews and that you sorta want to watch — only to find that 15 minutes into it, your mind drifts to that possibility engine within arm’s reach. Hmm … maybe this movie isn’t worth watching, after all.²

Here, the opportunity cost isn’t just the other streaming options you skipped, it’s also the other entertainment options offered up by your Mobile device. Is it a surprise that 70% of streaming, live, or DVR TV watching during Primetime involves other activities?³

Loss aversion and human nature.

If you’ve read Daniel Kahneman’s Thinking, Fast and Slow you’ll recall his discussion of Loss Aversion and Prospect Theory.

A quick summary. At the margins, we are willing to pay a premium to eliminate a small chance of a big loss — the example du jour is settling a frivolous lawsuit. Even if we have a low probability of losing, the small chance nags at us. We’re willing to pay more than would make logical sense (were we rational actors) to eliminate that small chance.

On the other end of the spectrum, we are willing to pay an overly large premium for a small chance at a big gain. The relatable example? Playing the lottery. You have almost no chance of winning yet you pay a price for a ticket that is significantly greater than the ticket’s expected value because “You can’t win if you don’t play.”

The small chance to win big helps explain why people play slot machines.

Both of these behaviors at the margin help explain why we have anxiety about our choices in consuming content — there’s a small chance that the content we choose won’t be worth our time. Meanwhile, our Mobile devices present the opportunity for an outsized gain — we have a small chance at making a fortuitous discovery. We overweight that chance in our decision-making and are left feeling compelled to turn to our phones.
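
To make that overweighting concrete, here is a minimal Python sketch of Prospect Theory’s value and probability-weighting functions. The parameter values are the commonly cited estimates from Tversky and Kahneman’s 1992 paper, which this post doesn’t reference, so treat them as illustrative assumptions rather than anything discussed above.

```python
# Minimal sketch of Prospect Theory's probability weighting and value functions.
# Parameters (alpha, beta, lam, gamma) are the Tversky & Kahneman (1992)
# estimates, used here purely as illustrative assumptions.

def weight(p, gamma=0.61):
    """Decision weight for probability p (gains); small p gets overweighted."""
    return p**gamma / (p**gamma + (1 - p)**gamma) ** (1 / gamma)

def value(x, alpha=0.88, beta=0.88, lam=2.25):
    """Subjective value of outcome x; losses loom larger than equivalent gains."""
    return x**alpha if x >= 0 else -lam * (-x)**beta

print(round(weight(0.01), 3))                        # ~0.055: a 1% chance is weighted ~5x larger
print(round(value(100), 1), round(value(-100), 1))   # ~57.5 vs ~-129.5: losses hurt ~2x more
```

The output is the whole story in two lines: a 1-in-100 shot gets weighted as if it were several times more likely, and a loss stings roughly twice as much as an equal gain feels good.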

We play the lottery with our attention, hoping for outsized gains. We hedge potential losses through multi-tasking.

And ‘round and ‘round we go. “Damned if you do. Damned if you don’t.” The modern predicament of our connected era.

Reasserting control.

I’m tired of telling myself it’s okay to be this tired.

How might we reclaim our time and attention? If anything, it starts with awareness of the problem — we are in uncharted territory when it comes to the interplay of access to information and human biology, which has not evolved to catch up to its current environment.

Asserting rationality.

Our natural response to turn to the phone to avert potential losses or seek potential gains is irrational, but it’s human. If we can accept the veracity of our nature, we can take steps to thwart it.

Daniel Kahneman talks about cognitive illusions — those times when our brains can’t see the reality of a situation. They are tricked. He makes an analogy to the Müller-Lyer illusion. A.k.a. this:

The Müller-Lyer illusion

If you’re familiar with the above illusion, you know that our minds can’t help but see the lines as being different lengths — yet it simply isn’t the case. By being aware of the illusion, we can assert rationality over our mistaken perceptions.

When we feel the innate drive to pull out our Mobile devices, ready to play the lottery with our attention or avert spending our time in a less-than-optimal way, we can take a deep breath, and realize our brains are up to their instinctive tricks.

We can re-center our focus and put our Mobile phones down.

Meditation.

This re-focusing of our attention shares much in common with meditation. Acknowledging distractions that take our focus away from the present — centering on the breath — is critical to meditation.

Interest in meditation appears to be on the rise. Take a look at this Google Trends chart for “meditation:”

Yuval Noah Harari, author of Sapiens and Homo Deus, has given much thought to the modern human predicament. Perhaps it’s no coincidence he has taken to meditation — going so far as to take a month a year to meditate.

Learning.

Perhaps there’s a chance we finally learn: our quest to optimize every moment of our time is futile. It can’t be done, so why chase the impossible?

The future is unknown.

While the future unfolds before us, what’s certain is that there will be no slowing of technology. The demands on our time and attention are only going to become more extreme. What we do about those demands is up to us, but starting from awareness of the problem and taking even small steps to reassert control feel like moves in the right direction.


¹ Android app Locket pegs the number at 110X/day (October 2013) while the TomiAhonen Almanac from 2013 puts the number at 150X.

² I call this streamer’s remorse.

³ GFK, September 2015

Certain section titles are lyrics from Vertical Horizon’s Falling Down.

The Axis of Content Consumption is Attention

The democratization of content may have already happened but it’s far from over. Today, we are all drowning, trying to consume as much content as possible, treading water as we dole out our time to whatever content manages to grab our attention. And no matter what we choose, we never feel like we’ve made the tiniest dent. We’re left dissatisfied and still drowning. The Internet is a flood.

The axis of consumption is attention.

Let’s take a step back to before the Internet became widespread.

Twenty years ago, the primary axis for content consumption was how much you were willing to pay to get access to the content. The Internet dethroned content as king. As Benedict Evans put it, “the device is the phone and the network is the internet.”

Today, the axis of consumption is how much attention you are willing to give to any given piece of content. The price of that attention is shifting away from how much you pay for access to the content—because most content these days is cheap or free—and toward how much content you are willing to skip. Your choice to consume one thing comes at the cost of not consuming something else.

In economic terms, the price of your attention is the opportunity cost of missing all the content you choose not to consume. In a world where content is plentiful and effectively free, human attention has become the scarce resource that sets the price.

While it’s nearly objectively true that the content available for consumption today is both more plentiful and of significantly higher quality than what was created a few decades ago, are we individually any better off? Are we any happier? I’m skeptical. A look around and people seem more on edge than ever. Perhaps it’s our innate loss aversion at work. It’s our nature to want to avoid losses—but in a world that is overrun with content, every choice we make costs us the ability to consume something else. We have anxiety—fear of missing out—with regard to the content we choose not to consume.

We are damned if we do, and damned if we don’t.

I must make the best choice.

This “no win” situation leaves us anxious about our leisure time—it must be spent wisely! We can’t afford to make a poor choice when it comes to picking a streaming television show—there’s too much good content out there (and you know it!) to waste our attention on something not worthwhile. So we spend 20 minutes flipping through Netflix trying to decide on something to watch.

We need really good reasons to consume particular content. Social signals are useful—is everyone going to be talking about this show at the water cooler? Who recommends this article? What can I hope to get out of reading this book? Whatever the content offers, the juice better be worth the squeeze.

When it comes to distribution, barriers to content are annoying, at best, and will result in content being skipped or ignored, at worst (this is the more likely outcome). The connection between slow-loading sites and users giving up on content has been researched at length (here’s but one quick set of results from Doubleclick on mobile site loading). The point? Any friction between content and users is going to cost you attention. The less valuable the content, the less friction users will tolerate.

How cheap content affects advertising.

That’s why there is a flight to quality when it comes to advertising — users will only tolerate interruptions in content if the content is deemed valuable enough to suffer the ads. Examples are helpful:

  • The Super Bowl. Here you have a time-sensitive event that the majority of Americans consume—and will be talking about the next day. You can’t time-shift your consumption of the Super Bowl because the content’s relevance only lasts for a very short window. The result is commercials are exceptionally expensive. And while the Super Bowl is at the extreme end of this, all live sporting events tend to follow the same pattern — users put up with the advertising friction because the relevance of the content is time-sensitive.
  • Facebook. Facebook content is selected algorithmically. What you see in the stream should be about the best content you could possibly see in your stream. That said, “best” is probably pretty mediocre (no offense to my friends), which is why Facebook’s advertising must be as low-friction as possible—and it is. Ads are targeted specifically to you, which at least removes some of the possible friction of seeing a totally irrelevant bit of marketing. More importantly, the ads aren’t screen takeovers, nor do they firewall your feed consumption — they’re just presented in-stream. Even videos auto-play silently, subtly in your face, passively trying to get your attention.
  • YouTube. YouTube’s pre-roll and skippable ads are huge barriers to content consumption, and as such, are much disdained by users. Unlike Facebook, Google’s approach here feels to me like an example of what not to do (And yes, I mentioned this to whoever would listen at Google, but I didn’t exactly have the ear of Susan Wojcicki). As if it weren’t bad enough that you sometimes have to wait for a video to load (4 out of 5 will click away if a video stalls while loading — Google), you then have to watch some terribly executed ad for a paltry bit of YouTube video? On a personal note, I never would have opted into YouTube Red but I got it via my Google Play Music All Access plan, which meant I never had to watch any more YouTube ads. The unexpected result? Wow is YouTube so much better without ads. I cannot watch YouTube anymore if it has ads. I will immediately bounce and make sure I’m logged in to Red to avoid YouTube ads. Ain’t nobody got time for that.
  • Google Search. Now consider Google search. Whereas YouTube ads are painful, Google search ads are “in stream” (you need but scroll past them) so they are low friction. They’re also keyed to your query, which is to say that they are likely to be as relevant an ad as you can get, both in content and in timeliness. It’s no surprise that Google search is still one of the most valuable places to advertise on the web.

The point of the above examples is that advertising in a world that is drowning in content is about making the advertising as friction-free as possible and highly relevant. There is a real flight to quality when it comes to content—users will leave channels that either have low-quality content or content that isn’t worth the price of consumption. Adding noise to channels (and irrelevant advertising is noise) means that users are going to leave that channel for other content-streams that have a stronger signal—higher quality content. Either that or they are going to use tools to shut the advertising down (i.e. ad blockers).

Meanwhile, the wealthy are just going to pay to avoid the advertising entirely. It’s the content equivalent of “white flight” whereby luxury affords you the ability to live in a high-quality content world, free of ads. Scott Galloway has been beating this particular drum for a while. The rich need not watch ads. I think he’s right.

The high price of attention.

Who doesn’t like having higher quality content that is less expensive? Ads that are being forced to be relevant or, at least, low-friction? Access is incredible.

Yet even as content has been democratized—in creation, distribution, and consumption—I can’t help but feel anxious, like I’ve lost control of my faculties. I look around at our harried existence, where people would sooner risk causing a horrible car accident than put their phones down and pay attention to the road. Or how I feel the nagging gravitational pull to spend every second consuming or creating some bit of content over the Internet. This anxious existence is the present. Is it the future?

NOTES:
If you are fascinated by attention, take time to read up on the attention economy. Here are a couple resources to get you started:

  • The Attention Economy and the Net — Michael Goldhaber. Prescient read that was written now over 20 years ago (April, 1997)!
  • The Wikipedia article on the Attention Economy is quite useful.
  • Gary Vaynerchuk has indicated that attention is a primary focus of his. He makes mention of attention arbitrage, which I see as relevant to what I call the flight to quality in advertising. Advertising is most impactful when deployed on channels that are low-noise, high-relevance. Watch him talk to Wieden+Kennedy back in 2016 and mention how Zuckerberg is also keyed into something called the “attention graph.”
  • Is it a coincidence that interest in meditation is on the rise? See Google Trends or Google’s Ngram viewer.

A Framework for Understanding Human Decisions—Jobs to be Done

The following post was originally written for the FullStory blog, but since I am such a Clayton Christensen fan and have blogged about this topic here in the past, I’m syndicating it for anyone interested.


Clayton Christensen, author of The Innovator’s Dilemma and Harvard business professor, makes the case that in order to understand what motivates people to act, we first must understand what it is they need done — the why behind the what.

Christensen first articulated this idea in a 2005 paper for the Harvard Business Review titled Marketing Malpractice: The Cause and the Cure when he wrote:

When people find themselves needing to get a job done, they essentially hire products to do that job for them 

Clayton Christensen, Photo by Betsy Webber, Shared via CC2.0

If a [businessperson] can understand the job, design a product and associated experiences in purchase and use to do that job, and deliver it in a way that reinforces its intended use, then when customers find themselves needing to get that job done they will hire that product.

Christensen’s theory has become known as the “Jobs” or “Jobs to be done” theory (“JTBD”) as it’s built around a central question: what is the job a person is hiring a product to do?

What is the job to be done?

How do you satisfy your hunger on your commute?

Professor Christensen tells a wonderful story to illustrate JTBD theory. It’s about a fast food company’s attempt to make a better milkshake. Said fast food company took the classic approach. They identified their target milkshake-slurping demographic, surveyed them about their milkshake preferences, implemented their findings, and didn’t improve milkshake sales whatsoever. What happened?

Christensen tells the milkshake story so well that we recommend you give him a listen (4 minutes, YouTube). Alternatively, the story is transcribed below.

Clayton Christensen talks about milkshakes.

We actually hire products to do things for us. And understanding what job we have to do in our lives for which we would hire a product is really the key to cracking this problem of motivating customers to buy what we’re offering.

So I wanted just to tell you a story about a project we did for one of the big fast food restaurants. They were trying to goose up the sales of their milkshakes. They had just studied this problem up the gazoo. They brought in customers who fit the profile of the quintessential milkshake consumer. They’d give them samples and ask, “Could you tell us how we could improve our milkshakes so you’d buy more of them? Do you want it chocolate-ier, cheaper, chunkier, or chewier?”

They’d get very clear feedback and they’d improve the milkshake on those dimensions and it had no impact on sales or profits whatsoever.

So one of our colleagues went in with a different question on his mind. And that was, “I wonder what job arises in people’s lives that cause them to come to this restaurant to hire a milkshake?” We stood in a restaurant for 18 hours one day and just took very careful data. What time did they buy these milkshakes? What were they wearing? Were they alone? Did they buy other food with it? Did they eat it in the restaurant or drive off with it?

It turned out that nearly half of the milkshakes were sold before 8 o’clock in the morning. The people who bought them were always alone. It was the only thing they bought and they all got in the car and drove off with it.

To figure out what job they were trying to hire it to do, we came back the next day and stood outside the restaurant so we could confront these folks as they left milkshake-in-hand. And in language that they could understand we essentially asked, “Excuse me please but I gotta sort this puzzle out. What job were you trying to do for yourself that caused you to come here and hire that milkshake?”

They’d struggle to answer so we then helped them by asking other questions like, “Well, think about the last time you were in the same situation needing to get the same job done but you didn’t come here to hire a milkshake. What did you hire?”

And then as we put all their answers together it became clear that they all had the same job to be done in the morning. That is that they had a long and boring drive to work and they just needed something to do while they drove to keep the commute interesting. One hand had to be on the wheel but someone had given them another hand and there wasn’t anything in it. And they just needed something to do when they drove. They weren’t hungry yet but they knew they would be hungry by 10 o’clock so they also wanted something that would just plunk down there and stay for their morning.

Christensen paraphrasing the commuting milkshake buyer:

“Good question. What do I hire when I do this job? You know, I’ve never framed the question that way before, but last Friday I hired a banana to do the job. Take my word for it. Never hire bananas. They’re gone in three minutes — you’re hungry by 7:30am.

“If you promise not to tell my wife I probably hire donuts twice a week, but they don’t do it well either. They’re gone fast. They crumb all over my clothes. They get my fingers gooey.

“Sometimes I hire bagels but as you know they’re so dry and tasteless. Then I have to steer the car with my knees while I’m putting jam on it and if the phone rings we got a crisis.

“I remember I hired a Snickers bar once but I felt so guilty I’ve never hired Snickers again.

“Let me tell you when I hire this milkshake it is so viscous that it easily takes me 20 minutes to suck it up through that thin little straw. Who cares what the ingredients are — I don’t.

“All I know is I’m full all morning and it fits right here in my cupholder.”

Christensen concludes:

Well it turns out that the milkshake does the job better than any of the competitors, which in the customer’s minds are not Burger King milkshakes but bananas, donuts, bagels, Snickers bars, coffee, and so on.

I hope you can see how if you understand the job, how to improve the product becomes just obvious.

Source: Clayton Christensen, YouTube

When the most direct route is the wrong way.

Christensen’s story about milkshakes implies that the traditional approach — asking a logically defined audience of milkshake consumers “What would make our milkshakes better?” — may be a waste of time.

Maybe we shouldn’t be surprised: this approach confuses the means (the milkshake consumers) with the ends (satisfying hunger, a boring commute, whatever the job may be). The result is “a one-size-fits-none product,” per Christensen, that does nothing for sales.

A business that organizes around solving for the actual needs of consumers has a clear reason for being because it’s those needs — those objectives — that are driving a customer’s behavior in the first place.

Forget the needs of your consumers at your own peril.

 

Method to the madness.

JTBD brings to our attention something we already know: everyone has reasons for the choices they make — a need, desire, self-actualization, whatever! Shakespeare wrote about this quintessentially human insight some 400 years ago in Hamlet when he penned, “Though this be madness yet there is method in it.”

Understanding the method behind the madness is about having empathy for the user.

When it comes to building products, success requires applied empathy towards better solving needs. That’s why it’s important to question whether features we’re building or product branches we’re developing will do the job better than [something else].

If the development we’re advancing is done without the customer need in focus, we might find we’ve developed the most amazing product that no one wants. (Like the piston-powered airliner — see Benedict Evans on The Best is the Last.)

Harvard Business Professor Theodore Levitt famously quipped, “People don’t want to buy a quarter-inch drill. They want a quarter-inch hole!”

Putting Jobs to be Done to work.

Using JTBD to understand consumer needs can be as easy as asking, “What did you turn to the last time you needed to do this?” In Christensen’s milkshake story, it helped consumers to think back on a previous time they were in the same situation and needed that job done — that is, the milkshake buyer needed something to satiate their hunger and their boredom on their long commute to work.

Reflecting on the products you “fired” can help clarify just what you needed to get done. In this regard, JTBD can be used to explain how many once-successful businesses were displaced by competitors that simply did the job better. Examples:

  • Netflix doing the job of Blockbuster — “I need something to entertain me,”
  • Uber, Lyft replacing taxis (and impacting the rental car business) — “I need to get from point A to point B,”
  • Google — “I need ______” … all the things,
  • Amazon — “I need ______” … all the things,
  • Smartphones — “I need _____” … all the things!

The question, “What does our product do better than the competitors?” is at the center of a recent post by Jason Fried (Signal vs. Noise), who channeled JTBD when he wrote, “What are people going to stop doing once they start using your product?”

If you can’t answer this question clearly, could you reasonably expect a potential customer to?

While JTBD is often relegated to business discussions, it can be extended to think about just about anything — your career, your hobbies, your relationships. You can ask yourself, “Why do I [X]? What is the job I’m getting done through [Y]?” Applying the JTBD frame introspectively may surprise you.

Applying Jobs to be Done to customer experience on the web.

“Way back when” we first built FullStory it was to solve an explicit job that we needed done: we needed to understand what users were doing on a site through high-fidelity session playback, down to the movement of the mouse.

That was only the beginning.

We soon realized that since we were already capturing all the data about user interactions on a web application, we could do other jobs, too.

  • We could do the job of visualizing aggregated, on-page user clicks — so we built Page Insights with Click Maps (Like heatmaps but with actionable clarity, that is, better!).
  • We could do the job of segmenting users by behaviors in order to better understand what job they are trying to do — FullStory users can now build on-the-fly marketing funnels based on specific, user-defined events with OmniSearch (e.g. find users who are referred by Google, add a product to their cart, and complete the sale). A generic sketch of this kind of event-defined funnel follows this list.
  • We could do the job of finding where users are getting frustrated — so we started identifying frustration events. We call them rage clicks, error clicks, and dead clicks.
  • We could do the job of visualizing the data in aggregate — so we built easy-to-grok graphs that autobuild based on segments (We call them “Searchies”).
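
For the curious, here is the generic sketch mentioned in the segmenting bullet above: a few lines of Python showing the basic idea of an event-defined funnel. It is purely illustrative; the event names and data shape are made up, and this is not FullStory’s OmniSearch or any real API.

```python
# Hypothetical sketch of an event-defined funnel: given each user's ordered
# event stream, count how many users complete each funnel step in order.
# Event names and data are invented for illustration.

from typing import Dict, List

def funnel_counts(events_by_user: Dict[str, List[str]], steps: List[str]) -> List[int]:
    """Return how many users reached each funnel step, in order."""
    counts = [0] * len(steps)
    for events in events_by_user.values():
        step_idx = 0
        for event in events:
            if step_idx < len(steps) and event == steps[step_idx]:
                counts[step_idx] += 1
                step_idx += 1
    return counts

sessions = {
    "user-1": ["referred_by_google", "view_product", "add_to_cart", "complete_sale"],
    "user-2": ["referred_by_google", "view_product", "add_to_cart"],
    "user-3": ["view_product"],
}
print(funnel_counts(sessions, ["referred_by_google", "add_to_cart", "complete_sale"]))
# [2, 2, 1] -> 2 referred by Google, 2 added to cart, 1 completed the sale
```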

When it comes to the job of building better customer experiences on the web, there are many jobs to be done — whether they’re the jobs of designers, engineers, product managers, marketers, or customer support.

We all have a lot of work to do to make the web a better place.

This post was originally written for the FullStory blog. (Come find me there for regular bits like this!)


Further reading

  • JTBD has also made its way into a 2016 book called Competing Against Luck by Clayton Christensen. Also see The Innovator’s Dilemma
  • The average-driven, “one-size-fits-none” milkshake reminds us of the problem with averages
  • Everything bagel 340 calories, Dunkin’ Donuts ~600 calories, Snickers 250 calories, banana ~100 calories, McDonald’s vanilla milkshake (M) 610 calories

Dunbar’s Number, Broken Social Networks, and Back Scratches

A photo I took of monkeys grooming each other in India circa 2008.

My brother passed on an article in The New Yorker from a couple weeks back titled The Limits of Friendship. It’s an exposition on Oxford anthropologist Robin Dunbar’s discovery that humans organize into social groups that tend to range from 100-200 people, with the average—150—being an optimal rule of thumb. This is known as Dunbar’s number.

The discovery was made by observing the correlation between the size of an animal’s frontal lobe and the size of its social groups: the larger the frontal lobe, the larger the social group for that animal. Applying this understanding to human brains, “Judging from the size of an average human brain, the number of people the average person could have in her social group was a hundred and fifty.”

Here’s more on the number (emphasis added):

The Dunbar number is actually a series of them. The best known, a hundred and fifty, is the number of people we call casual friends—the people, say, you’d invite to a large party. (In reality, it’s a range: a hundred at the low end and two hundred for the more social of us.) From there, through qualitative interviews coupled with analysis of experimental and survey data, Dunbar discovered that the number grows and decreases according to a precise formula, roughly a “rule of three.”

So the 150 (an average) cuts down to 50 and then 15 and 5. I suppose you can’t have 1.66 people, but then again maybe you can?  It’s an interesting question given most of us (present company included) adhere to long-term monogamous relationships.

These optimal human social group sizes are seen in average modern hunter-gatherer societies (148.4 people) and in military company sizes from the Roman Empire to modern times (with companies having sub-groups that also fit the Dunbar rules).
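
As a quick back-of-the-envelope on that “rule of three,” here is a tiny Python sketch that starts from 150 and repeatedly divides by three. The factor of three comes from the quoted passage; the rounded results only roughly match the 50, 15, and 5 layers Dunbar describes.

```python
# Sketch of Dunbar's "rule of three": start from the ~150 casual-friend layer
# and repeatedly divide by three to approximate the closer circles.

def dunbar_layers(start=150, factor=3, cuts=3):
    layers = [start]
    for _ in range(cuts):
        layers.append(round(layers[-1] / factor))
    return layers

print(dunbar_layers())  # [150, 50, 17, 6] -- roughly the 150 / 50 / 15 / 5 layers
```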

Applying Dunbar’s number to me.

When it comes to my day-to-day networks, I work with about 17-20 people on a week-in, week-out basis, which feels about right in that I have a place that supports that group in a meaningful way; but it also leaves me a little stretched, and I can’t (and don’t) have real closeness and support at that group level. Cut by Dunbar’s rule of three again, my inner “work” circle is around three people.

It’s harder to create tiers of my friends, but I’d count about four or five people in my closest circle. It’s hard to expand that circle meaningfully.

Sidebar. I’m reading Peter Thiel’s Zero to One, which, though I’m only about halfway done (it’s a short couple hundred pages), is well worth your time. Therein, Thiel talks about your coworkers and how you should work with people you like. He makes the following insight, which is powerful in its implications:

Since time is your most valuable asset, it’s odd to spend it working with people who don’t envision any long-term future together. If you can’t count durable relationships among the fruits of your time at work, you haven’t invested your time well—even in purely financial terms.

I’m just going to leave that there.

Dunbar’s number and the brokenness of social networks.

I’ve got about 235 Facebook friends, 633 LinkedIn connections, 800 Twitter followers (I’m only following 90 or so), and 800 people in my Google+ circles.

What a bunch of meaningless numbers. #Amiright?

I’ve talked about digital isolation at length before (the last post on this blog over a year and a half ago), but the curated social network is obviously broken. Don’t we all know this? I’d eagerly hear any arguments that “Social Networks” as we know them grow meaningful relationships. The interactions they foster (aside from the 1:1 or 1:few interactions) are ephemeral and lack depth. While it costs almost nothing to “like” someone’s post on Facebook (or comment), the time and effort we expend on those types of interactions is, in aggregate, high-cost and low-return.

Is there a better solution? Probably. And thoughtful minds have been trying to find it for at least the last 5-10 years.

The New Yorker article had something to say on this front, as well, via Dunbar:

There’s no question, Dunbar agrees, that networks like Facebook are changing the nature of human interaction. “What Facebook does and why it’s been so successful in so many ways is it allows you to keep track of people who would otherwise effectively disappear,” he said. But one of the things that keeps face-to-face friendships strong is the nature of shared experience: you laugh together; you dance together; you gape at the hot-dog eaters on Coney Island together. We do have a social-media equivalent—sharing, liking, knowing that all of your friends have looked at the same cat video on YouTube as you did—but it lacks the synchronicity of shared experience. It’s like a comedy that you watch by yourself: you won’t laugh as loudly or as often, even if you’re fully aware that all your friends think it’s hysterical. We’ve seen the same movie, but we can’t bond over it in the same way.

This massive shortcoming of digital social interactions harkens back to the inability to “be there” in virtual space the way we can be for others in real-space. Even still, “being there” in the sense of being present to others (and not distracted-while-in-others-presence by engaging our devices every spare moment) is being eroded as digital grabs our attention in the hundreds of spare moments throughout our day.

At least as far as relationships are concerned, some of those spare moments were ways to be present to others—gifting friends and family your time, even if they waste it.

Perhaps we should be grooming each other.

On an even deeper level, there may be a physiological aspect of friendship that virtual connections can never replace. This wouldn’t surprise Dunbar, who discovered his number when he was studying the social bonding that occurs among primates through grooming. Over the past few years, Dunbar and his colleagues have been looking at the importance of touch in sparking the sort of neurological and physiological responses that, in turn, lead to bonding and friendship. “We underestimate how important touch is in the social world,” he said. With a light brush on the shoulder, a pat, or a squeeze of the arm or hand, we can communicate a deeper bond than through speaking alone. “Words are easy. But the way someone touches you, even casually, tells you more about what they’re thinking of you.”

Dunbar already knew that in monkeys grooming activated the endorphin system. Was the same true in humans? In a series of studies, Dunbar and his colleagues demonstrated that very light touch triggers a cascade of endorphins that, in turn, are important for creating personal relationships.

Who doesn’t like to have their back scratched? Serious question.

There’s something to being touched. Who doesn’t take note when a coworker pats you on the shoulder? We register touch in a meaningful way and while I can’t help but think of tapping the “like” button as a digital touch, I doubt highly that receiving likes sets in motion the bonding/endorphin response Dunbar sees with monkeys grooming.

Indeed, a firm handshake is still the common greeting between people when they physically engage each other, but a handshake isn’t sustained, nor are handshakes common beyond initial formalities. Just thinking back on the last few days, I’ve hardly shaken anyone’s hand.

Meanwhile you’ve got hugs. I love a good hug as much as the next guy and while I can’t put my finger on a specific source, Google seems to support hugs as a source of oxytocin, which is another socially-dependent, “connecting” hormone.

On the other hand, there’s hand holding. I’m reminded of one of the more endearing things I’ve experienced through getting to know Indian culture, which is that male friends often hold hands. This has resulted in times when I’ve sat down next to my father-in-law and held his hand. It’s hard to express just how surprisingly comforting that specific sort of engagement is.

We forget our base animal nature at our own peril. I don’t know how we get back to grooming-as-a-way-to-connect with others. I do think I’ve found brushing my daughters’ hair to be a way to connect with them.

And then there’s always a good back scratch, right?

So what do we do?

In our age of solving the world’s problems through digital efforts, I’m left wondering how we can get back to something physical in the way we engage with others. Physical touch seems like a missing ingredient in the equation. A rapidly evaporating ingredient is being physically “there” for others.

Are there ways to digitally solve this problem? While the Apple watch offers a way to share a “touch” through haptic feedback, I’m skeptical of this as the solution we need.

Got any other ideas?

 

Meatza! Meatza!

So last night I made a meatza for the second time. For those of you who’ve not had or heard of a meatza, it’s basically a pizza you make using ground beef for the crust.

I consider myself pretty carnivorous and I love pizza (pizza and beer make for the one-two punch that knocks me completely off the paleo bandwagon*), but I have to confess: the notion of a meatza just didn’t appeal to me at first blush. Plus, I’d made a few attempts at the almond flour crust pizza and been a bit disappointed. It’s a lot of work to make an almond flour pizza, so when the result consistently disappointed, I just gave up on a low-carb pizza solution.

It was only when a few friends mentioned they had enjoyed meatza that I decided to give it a shot. I knew Richard had made meatza based off a meatza recipe from the Healthy Cooking Coach, so that’s where I scrounged up the basic directions.

Now, making a pizza is as simple as making a crust, adding toppings, and baking it in the oven. With a meatza then, the most complicated part here is making the crust. And it’s also the part that you’ve got to get right to make sure your meat pizza is delicious!

Enter my Italian grandfather’s meatball recipe. My Pop has a fairly famous (within the family anyway) spaghetti recipe for sauce and meatballs. I’m going to skip the sauce part for now because it’s a bit more labor-intensive. In a pinch, you can just use some spaghetti sauce from the store.

Anyway, it’s the meatball recipe that really knocks the meatza crust out of the park, so without further ado, here are the ingredients and directions:

  1. I use a 12×17 rectangular pan
  2. Get a decent sized mixing bowl to mix the meat
  3. Start pre-heating the oven to 450°F
  4. For ease of mixing into the beef, I go ahead and get all my seasonings out and into a little bowl (this will make more sense in a second):
    • 1/2 cup grated Parmesan cheese**
    • 3 tsp salt
    • 1 tsp caraway seeds (this is the magic ingredient in my opinion)
    • 1 tsp oregano
    • 1 tsp garlic salt
    • 1 tsp coarse ground pepper
    • 1 tsp red pepper flakes (optional)
    • 2 lbs. ground beef (80/20 is fine)
  5. Seasonings mixed, put the beef into the mixing bowl, crack two eggs into it, and mix the beef and eggs first. This is because the runny egg can cause the seasonings to clump together, and it just makes mixing a lot messier and less uniform if you don’t mix the beef and egg prior to adding the seasonings. Thus the need to pre-mix the seasonings — your hands are all covered in beef at this point, so you just have to pour in the seasonings!
  6. Add seasonings and Parmesan cheese and mix well! (Note: you can also do the Parmesan after the egg/beef pre-seasonings if you want)
  7. Take the mass of mixed beef and slam it onto your pan. BAM!
  8. Flatten it out: you should be able to just about cover a 12×17 pan with the beef
  9. Oven pre-heated, throw it in there for 10 minutes!

At this point, I immediately start pre-cooking certain ingredients that need a little extra attention; in my case, it’s sliced mushrooms and diced green pepper sautéing in a cast iron skillet with some pasture butter.

Ten minutes up, take the crust out of the oven. Set the oven to Broil (it will take a few minutes to heat to this point).

You’ll now notice the crust has shrunk considerably and there’s a good bit of rendered fat in the pan. Pour the fat out of the pan. Optional step: take a paper towel and wipe up any extraneous beef “stuff” that is exterior to the crust. This is simply because it’s not part of the crust and if you leave it, it could cake to the pan and make clean-up more of a hassle.

Now, you just add your toppings. For me, I added Newman’s Sockerooni spaghetti sauce as a base, then a layer of pepperoni, then a base layer of mozzarella cheese. From there, I add sliced green olives, feta cheese, the mushrooms and green peppers, more pepperoni, and top it all off with more cheese.

Broiler heated up, I throw it all back in the oven until the cheese is done. It cooks very fast at this point! Like five minutes is almost too long in our case, so make sure you keep a close eye on your meatza!

Take it out, let it sit for a minute or two, and slice and serve. I find this slices into six solid size pieces — I can eat a half a pizza under the right conditions, but one slice of meatza and I am good to go.

Enjoy!

Oh one more thing: I really like this alternative pizza combination. It is downright delicious and I don’t get that “sigh this isn’t really pizza” sensation at all. I’m not sure it is pizza, really, but it is really good, so who cares!

* Beer, chips, and salsa at Mexican restaurants being right up there, too.

** Were you to make the meatballs, you’d actually use two slices of bread, wet, and torn into small pieces OR 1 cup of plain bread crumbs (you can sub almond flour if you like)

Shoot first and ask questions later (And have kids even if you don’t want to) (Updated, sorta)

Below is a response to Patri Friedman’s recent post on his pro-parenthood bias:

I’m late to the party.

My first kid is about eight weeks from greeting the world (and piercing my ears for the first few months or years!), so I’ve been giving the whole parenthood thing a lot of thought over the past few months. Incidentally, though we intended to have kids eventually, it happened sooner than we were planning.

Such is the unpredictability of life.

Which brings me to a point that you didn’t make, one that Bryan Caplan has alluded to via some scrounged up surveys of parents. The data Caplan found indicates that almost no one regrets having kids. Most parents wish they had *more* kids than they end up having. And adults who don’t have kids also tend to wish later that they had reproduced (For sake of saving a few words or directing others, see this post on the data).

Even though this backward-looking data supports the argument to have children, I don’t think it’s necessary to conclude that you should reproduce.

We are apparently quite bad at predicting what will make us happy in the future. For a nice read on this subject, I recommend picking up Dan Gilbert’s “Stumbling on Happiness” (and if you are too busy to do that, just read my selected quotes from Stumbling on Happiness here). A theme of Gilbert’s, which is also a theme of books like Taleb’s “The Black Swan,” is that everything is much more complex than we make it out to be, and this complexity makes our grossly simplified forecasts fundamentally flawed — useless at best, harmful at worst. As applied to those people who choose not to have kids, as much as they think they know what will make them happy in the future, they are almost certainly going to be wrong about their predictions.

Accepting our inability to know what will make us happy but understanding that it is a biological imperative to reproduce and realizing that it will be much more expensive to reproduce past our reproductive prime, all signs point to shooting first and asking questions later.

Of course, to have kids or not is no simple binary choice. Procreating makes for an incredibly “bushy” (complex) life experience. Kids add randomness and depth to our lives in ways that we can’t possibly foresee but ways we will likely enjoy*. Sure, by having kids you’ll forgo some experiences as you engage life by yourself or with your significant other, but the experiences you’ll forgo by not having children are wholly new and unpredictable — the life of an entirely new human being: you, your significant other, and your kid(s).

In short, I liken parenthood to doing first and understanding later. This is a good rule of thumb to apply across almost all facets of life — lots of iterations make for lots of experiments through which we can learn about and enjoy life. Not having kids is a choice to have a drastically less-interesting, much more simplistic and sterile (literally and figuratively) life. I wouldn’t wish that on anyone I care about.

So I shake my head when friends make that choice.

Finally, I don’t really understand how anyone can understand humanity through the lens of evolution and not have children. Having kids means getting in touch with our core humanity — our biological nature — and living out the imperative coded in our DNA: to create life. Reject your hardwired nature at your own risk.

For my particular contribution to furthering human evolution, our kid is getting a mix of the DNA from a Caucasian (me) and an Indian. Gene-swapping for the win!

* Another SoH idea is that we are better off charging into the unknown than doing nothing because our mental immune systems are better at justifying our decisions after the fact than they are at managing the grief of what could have been.

** Not a brightline conclusion, I know — you can always adopt or potentially figure out other methods to have children after you pass your reproductive time.

Update: So despite my comment being one of the last out of the 170+ comments to Patri’s post, I got a couple shout-outs in follow-up posts by Patri (here and here). And I had to throw in one more comment, which I’ll copy below, which is more or less an application of Pascal’s wager to the decision to have children. So here’s my second comment:

Another point regarding the buyer’s remorse stats — if the majority of people who don’t have kids ultimately regret it, it seems highly likely that at least one person in a committed sterile-by-choice relationship will regret their decision. Yeah, people often select mates based on whether or not they want to have kids, but these same individuals also often change their minds about their choice (thus the tendency towards regret).

And this often leads to wrecked, otherwise fantastic relationships. I’m sure that I am biased in making this observation — I know someone who clearly regrets not having children. His spouse of twenty years, on the other hand, seems perfectly content. And it has put an enormous amount of unspoken strain on their relationship, not to mention, it is a point of intense sadness for this individual.

I see a slight parallel to religion here. Having kids because you expect it to be somehow fulfilling is a bit like hoping for a reward in heaven when you die — a life lived adhering to some arbitrary religious codes requires a lot of obvious work with less than obvious rewards, not unlike the decision to have kids.

Except that is where the similarity breaks down. With the choice to procreate, not only do we see the direct benefits of our own parents’ choice (as in, I am alive and I believe my life is not only good for me but also for my parents), we see the benefits accruing to our friends and relatives.

I mention all of this because the anti-procreation argument assumes that you know without a reasonable doubt that you will be happier/more fulfilled/better off without children. Not only is there a lot of observational/anecdotal/statistical evidence suggesting you might be wrong, there’s also the reality that we are very bad at predicting what will make us happy in the future. The cards, it seems, are very much stacked against those who believe they’re better off without children.

So even if you don’t want to now, have kids anyway. To me, this argument is a version of Pascal’s wager that actually makes sense.

Cross-Pollinating Ideas via the Internet

I was just leaving a comment on Richard Nikoley’s latest blog post, Vitamin K1 vs. Vitamin K2, concerning natto, a fermented soy food from Japan that contains a huge amount of Vitamin K2. I was specifically pointing out that fish gonads, which are considered to have a high K2 concentration (something I had learned over at Stephan Guyenet’s Whole Health Source: Seafood and K2), are absolutely dwarfed by the K2 concentration in natto^. I had first learned about natto and the importance of fermented foods via Seth Roberts’ blog (See his Fermented Food Category). Put differently, my comment took data from three different sources and presented it in a coordinated, collaborative manner.

Though this might not be the best term for it, I call these occurrences examples of the “cross-pollination” of ideas. It’s a collaborative, unpredictable, uncoordinated, complex effort whereby ideas and information gleaned from disparate sources are examined in relation to one another. It is knowing the trees and seeing the forest. The goal is to create more useful ideas and better information, and then spread this new knowledge far and wide. And do it over and over again. If this reminds you at all of evolutionary processes, not only are you catching my drift, you’re cross-pollinating.

Idea cross-pollination is amplified by the Internet. Historically, a powerful idea or discovery could languish in obscurity, the pet project of an experimenter who works in the silo of his own research. This was the case with Isaac Newton who had discovered/created calculus decades before it was made public.

Compare how calculus languished to the ideas contained within Gary Taubes’ Good Calories, Bad Calories, a book written by a non-specialist (Taubes is a writer, not a scientist) that looks at an enormous amount of nutrition-related research, sees common threads across the data, and presents it all in one place, calling into question the mainstream nutrition mantra that low-fat is healthy, fat will kill you, and people are obese because they eat too much. GCBC was created by having the power to examine the research of a number of disparate specialists and see the big picture.

A book like GCBC is made possible by the Internet because it becomes much less likely that ideas remain within the dusty silos of specialists. The Internet takes curiosity, search, and a great deal of disparate computing power*, and uses them to spread ideas much, much faster. Non-specialists (like Taubes or me) then have the pleasure of making fortuitous discoveries of connections across specialties.

Of course, the means by which cross-pollination is accomplished are unpredictable: we can’t plan a course to find them. All we can do is cast a wide net, examine a lot of ideas, follow our curiosity, and let our organic pattern recognition software do its thing. This is very much a “learn by doing, then by thinking” concept. If we dabble in this gamble enough, every once in a while, we will hit the idea jackpot.

Mind, the idea of idea cross-pollination isn’t really an external process across disparate people, at all. To the extent that we learn ideas, we store copies** of them in our brains, forever taking the ideas with us (a reason legal boundaries around mental concepts are fundamentally absurd). Indeed, it seems that the majority of my intellectual growth has been predicated on being able to cross-pollinate within these internalized knowledge stores. I am always trying to reconcile previously learned ideas with new ones. In this way my organic human network, a human brain, is mimicked by the inorganic mesh of networks we call the Internet.

In sum, cross-pollination of ideas has always been occurring — it is a human specialty, warts and all. Thanks to the Internet, it’s happening more, and we’re getting an explosion of ideas/concepts/knowledge as a result.

^ It seems that Natto is an obscure bastion of nutrition, which may be due to the fact that it (apparently) doesn’t taste the greatest. I’ve yet to get my hands on any as it is exceedingly hard to find. Rest assured, I will be eating some just as soon as I get a chance to check out the only Japanese grocery store in Atlanta.

* As in, human minds that work to understand and pull together the data they discover.

** Albeit imperfect, frequently mutated copies, but this, again, can make for fortuitous idea creation, and as far as I can tell, acts as a positive, dynamic force.

Seth Roberts and the Shangri-La Diet

I cite Seth Roberts’ blog a great deal over at Linked Down. Seth is a Psychology Professor at Berkeley and an avid self-experimenter. I’ve learned a great deal from subscribing to his blog.

For those who don’t know, Seth Roberts created the Shangri-La Diet, which is a diet centered around reducing the association between flavor and caloric load. I haven’t read the book, so this is an approximation of how it works, but the gist is that the more correlated taste is to caloric load, the greater hunger can be, the harder it will be to cut calories, and the higher your body’s set point for weight will be. “SLD” hacks this relationship via ingesting flavorless calories within certain windows of time. These flavorless calories reduce the brain’s association of high energy density with high flavor. Interestingly enough, the macronutrient source of the calories may be unimportant: you can do SLD with oil, sugar water (so long as it is flavorless), or nose-clipping while eating protein. If you’re skeptical about this diet, I suggest taking a trip over to the SLD Forums and being prepared to see plenty of evidence that SLD works.

Even though I have not tried SLD, it is a fascinating idea, and it seems that anyone who is serious about better understanding why we gain weight and what regulates hunger and adiposity must take it seriously enough to figure out how it fits into the big picture of human health. Barring that gargantuan task, it’s at a minimum another way to try to hack weight loss if your current regimen isn’t cutting it for you.

I mention all of this because I stumbled on a 2008 interview between Roberts and Gary Taubes, author of Good Calories, Bad Calories, which I’ve blogged about exhaustively. What a great thing to find that two people I admire had a thoughtful discussion and, even better, that said discussion has been made available to me!

Blogging, science and the internet FTW.

Back to trying to understand how SLD fits into the grand scheme of human physiology. An interesting comment was made at the bottom of Part 13 of Roberts’ Interview of Gary Taubes:

I’ve thought a lot about how consuming tasteless food could suppress hunger. My favorite theory is that it is similar to what happens when an animal is hibernating. The “magical” appearance of calories fools your body into thinking it is living off its fat and then it actually does so.

This comment reminded me of how the metabolic pathways used while fasting are the same as those used when we consume a diet of only fat and protein. One effect of low-carb diets is appetite suppression. Could the common theme here simply be that SLD, low-carbohydrate diets, and fasting all act to “trick” our bodies into switching to a non-hungry state?

Obviously that can’t be the entire picture because insulin is the storage hormone that is unleashed by carbohydrate consumption (though less so with fructose).

This issue is worthy of further thought.

Learn by Doing, Then by Thinking

[Photo: “Danger: Hubris” (Creative Commons photo credit: toolmantim)]

Seth Roberts talks about his graduate school days and how he got into self-experimentation by way of the axiom that “the best way to learn is to do”:

And then I was in the library and I came across an article about teaching mathematics and the article began, “The best way to learn is to do.” And I thought “Huh well that makes a lot of sense.” And I realized you know that it was a funny thing that that’s what I wasn’t doing: I was thinking. And I also thought to myself well I want to learn how to do experiments. And if the best way to learn is to do then I should just do as many experiments as possible as opposed to trying to think of which ones to do. And that was really a vast breakthrough in my graduate training and everything changed after that.

Quoted from a 10-minute presentation by Seth Roberts

Roberts goes on to apply these ideas to graduate students and professors, noting, “Grad students … worry too much about what to do. Professors often … do something more complex than necessary.”

This is a simple, but enormously important idea: we learn more by doing first than by thinking first. The idea lies at the foundation of empiricism. Despite its simple power, it seems that across all aspects of life, we show a clear preference for thinking over doing. Why?

One reason may be that we overestimate our ability to “figure things out” by thought alone—generally, this is hubris. Another reason for this preference for thinking is poor assumptions. For the novice, the presumption is that less experience, or fewer trials under your belt, must be made up for with more reasoning and thought. For the expert, the problem is reliance on accumulated experience as a basis for supposedly reliable reasoning and thought.

Regardless of the “Why?,” thinking before doing causes problems. Specifically:

  • Thinking results in unnecessary complexity, which obfuscates our ability to interpret results,
  • Thinking sets expectations, biasing analysis towards certain results, and
  • Thinking is time-intensive, reducing resources that could be used doing.

These problems hinder our ability to learn—not just in scientific experiments, but in virtually every aspect of our lives. Here are just a few examples where I’ve seen the problem manifested:

  • Our education system is founded on thinking over doing. School boards think through what subjects students should learn. Even when choice is introduced, such as in college, there are enormous costs to trying a lot of disparate subjects. Not surprisingly, students get locked into fields of study, only to learn that they don’t particularly enjoy their chosen major once it’s too expensive to do anything about it. Conversely, look at blogging, which seems to result in a great deal of knowledge gathering, but is driven heavily by random curiosity.
  • More on blogging. Perhaps the triumph of blogging lies in the thoughtlessness of it. Sure, you put plenty of thought into a blog post as you are writing it, but unlike writing a book, the blog post is so much cheaper that you end up having many more iterations and much less “thinking” behind each individual post. Contrast blogging to the editorial thinking that is put into mainstream journalism. This thinking results in a lot of censorship — not in the classic “You can’t use that word” sense, but in the “I think your idea should be altered in X, Y, Z ways.” The result? More complex ideas. Fewer ideas. Bad ideas. (Added 4/17/09)
  • The same problem is seen with career choices. We think our way into a certain career versus learning what works and what doesn’t work by simply trying out different types of work. We try to think our way into figuring out our passions. It just doesn’t work.
  • Or apply the idea to William Glasser’s Control Theory. Glasser argues that it is difficult, if not impossible, to change what we think or feel about something that happens to us. Thus, our best course of action is to simply do something.
  • Or consider another book: Daniel Gilbert’s Stumbling on Happiness. Gilbert makes the point that, “We insist on steering our [lives] because we think we have a pretty good idea of where we should go, but the truth is that much of our steering is in vain … because the future is fundamentally different than it appears through the prospectiscope.” Thinking through what we want is something we all do, yet it rarely is effective at leading to happiness. How often do we finally get what we want only to realize that the experience is not what we expected? This is a failure of thought.
  • Nassim Taleb harps on overreliance on thinking all the time. The Black Swan is essentially a book about hubris and the misguided belief that we can think through everything. As another example, Taleb doesn’t read the news because it formalizes thought, effectively handicapping our cognitive function by creating bias. In Nassim Taleb’s recent interview on EconTalk, he talks about “tinkering,” which is more or less just trying different stuff out and seeing what works, as a means to learn.
  • Or look at thinking over doing as it pertains to governments and political debate. Was there ever such an embodiment of preference for thinking over doing? Every government generally and every government program specifically is a thought-out experiment tested on a massive scale. Should it come as a surprise that governments and government programs are so dysfunctional? Observe how political philosophers consistently prefer thought to action, a la Folk Activism, dismissing attempts at trial and error or ignoring the importance of seeking new frontiers for experimentation, while arguing, “We’ve yet to see pure [ socialism | capitalism ]; therefore, you can’t say it wouldn’t work!”
  • I haven’t read Bryan Caplan’s Selfish Reasons to Have More Kids (Preface), but that’s at least partially because it hasn’t been published yet! In thinking about the question of children, two thoughts come to mind in relation to the doing/thinking problem (and both relate to Caplan’s review of a study about how “Almost no one regrets having kids”):
    • Couples who choose not to have kids have overthought the problem and will almost certainly regret their decision not to have kids.
    • Parents who think they should only have two kids (for example) will likely end up wishing they had had more—in retrospect, parents seem to think they should have had more kids than they ended up having!

    Having kids isn’t like waking up and making an omelette, so I realize that this one fits into the doing-vs-thinking paradigm a bit loosely, but nonetheless, it’s just another example of how thinking fails. (Added 4/17/09)

  • Life is the result of trial and error performed on a massive scale and is ongoing. As complex as a DNA molecule may be, the individual building blocks are simple. So here’s an example of doing (DNA replication) and simplicity leading to unfathomable complexity—life. Evolution is the triumph of doing and is clearly a thoughtless process. (Added 4/17/09)

As Seth Roberts realized in his graduate days, “I should just do as many experiments as possible as opposed to trying to think of which ones to do.” But why does doing first work better than thinking first? Perhaps it is because doing is fundamentally an iterative process: doing is trial. The idea of trial and error as a method of learning means making mistakes and learning from them. Making mistakes and figuring out what doesn’t work can also be desirable as evidence of absence. I further wonder if it is the sheer number of trials that spurs the creation of knowledge. Could it be that the more experiments/trials/iterations, the greater the chance of winning the lottery and learning something truly worthwhile? Maybe so.

I can’t help but conclude that, regardless of the reasons, thinking should almost always be put on hold in favor of action. Stop thinking and start doing. Follow whims, opportunities, gut instincts, and curiosities. Observe as much as possible. Expect failure and realize that it is through innumerable failed attempts that one can stumble on success.

Giving up the Mouse: Use Hot Keys [Grind Skills]

[Photo: “Day 88: Mice? Mouses?” (Creative Commons photo credit: tsmall)]

In our computer-driven age, quite possibly the greatest “Grind Skill” that everyone could benefit from, aside from knowing how to type, is knowing and using keyboard combinations to perform necessary functions in lieu of using your mouse!

The mouse (or touchpad in the case of laptops) is a fantastic and integral part of using a computer. It has made the computer incredibly user friendly, and its one-size-fits-all applicability means that it is the de facto universal execution device all computer users have grown to love.

Handicapping your productivity with a little mouse

Unfortunately, near-total reliance on a mouse (as in, using it for almost every computer process except typing) will dramatically and unnecessarily handicap your computer use and waste untold hours, and ultimately days, of your time. This is because whenever you have to switch from typing mode to using the mouse, your hand (typically the right) must move off of the keyboard, over to the mouse, and then back again. For a very simple operation like bolding a word or copying and pasting some text, the time taken to move your hand off the keyboard and back is insignificant by itself, but it adds up to an incredible amount of time. When you consider that computers are here to stay, and that for the foreseeable future we will continue to use QWERTY keyboards and a mouse or other cursor-based hardware to navigate the virtual spaces of software, hardly any grind skill could be more important than maximizing the use of both devices. Since the keyboard is how we input data, our hands actually produce* when they are kept at the keyboard. The more time they stay on the keyboard, the more we produce and the more efficiently we use our time.
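To make that concrete, here is a purely illustrative back-of-envelope calculation (the numbers are assumptions, not measurements): if each round trip from keyboard to mouse and back costs about two seconds, and you make that trip 300 times in a workday, that is 600 seconds, or 10 minutes a day. Over a 250-day work year, that comes to roughly 40 hours, the better part of a work week, spent just ferrying your hand between devices.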

You must first acknowledge, on a very basic level, that the problem is the time wasted moving your hand to the mouse and back. I’m not advocating throwing away your mouse! What I am advocating is that you wake up to how the mouse is handicapping your productivity and actively choose to seek out and implement ways to use the keyboard to accomplish tasks that you normally relegate to the mouse!

If you’re new to using your keyboard instead of your mouse to “do stuff” on your computer, the idea of learning thousands of keyboard tips and tricks to improve your efficiency is no doubt daunting. Thankfully, no one who has mastered this grind skill was required to memorize them all at once. The beauty of this grind skill is that it can be implemented piecemeal, and the skill will build on itself over time. The key is being aware of the problem and working to improve your implementation!

For those of you who are already using your keyboard for certain tasks (I include myself in this category, obviously), be aware that you could almost certainly be using more hot keys and keyboard combinations. As with so many grind skills, mindfulness about how we spend our time on mundane tasks is the key to improving those tasks and making ourselves more efficient so that we can stay on top of the important jobs we want to complete!

Going forward, I will be writing on various hot keys and keyboard combinations to use. As there is such an enormous wealth of information involved in this uber-Grind Skill, it will take some time to get it all published. For now, I’d like to start with some basics.

Using the Ctrl or Control key**

The Ctrl key is typically used in conjunction with a letter or number to perform a “hot key” function: you hold the Ctrl key down at the same time as the letter or number to perform the function, so these combinations will be expressed as Ctrl+[the letter/number]. Many novice computer users have managed to learn Ctrl-based hot key functions like Ctrl+b to bold, Ctrl+c to copy, or Ctrl+v to paste. The “hot key” moniker typically applies to these Ctrl-based keyboard combinations.

Try it out. Open up a text editor of your choice. Use your mouse (It’s ok!) and highlight this sentence. Press Ctrl+c. Go to the text editor and use the mouse to click into the input box: press Ctrl+v. Voila!
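A few other Ctrl-based hot keys worth committing to memory (these are standard in most Windows applications, though any given program may map things differently): Ctrl+x to cut, Ctrl+z to undo, Ctrl+s to save, Ctrl+f to find, and Ctrl+a to select all.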

Using the Alt key

The Alt key is perhaps the most underutilized key on the keyboard. Many people know a great deal of the Ctrl-based hot keys, but continue to almost universally ignore the power of the Alt key. In almost all programs, the Alt key takes you to the topmost menu bar in an application. You can test this out if you open up Internet Explorer (referenced here because almost all PC users have this application). If you open IE and press Alt, you will see that you are “up” on the menu bar and that the various options on the bar have an underlined letter, like File or Edit. What this means is that, once Alt has been pressed (and released!), if you then hit the “f” key, you will expand the “File” menu. From there, new functions will have a letter underlined. Here, again, pressing the letter will either expand a sub-menu or execute a command. Unlike Ctrl-based functions, Alt-based functions are typically expressed like Alt, [letter], [letter], where the comma means you release each key before pressing the next.

Try it out. In Internet Explorer, press Alt, a, a. This will open up the “Favorites” dialogue box in IE 7 and allow you to add this page to your Favorites! Yeah, you probably don’t want to bookmark this page, but this gives you an idea. Alternatively, you can hit Alt, f, a to bring up the “Save As” dialogue; again, this is just to demonstrate how the Alt function works.

Unlike Ctrl-based hot keys, which you essentially have to discover on your own, Alt-based keyboard combinations require no memorization. Simply pressing “Alt” will show you what letters will do what, and so on. The beauty of the Alt key is that you can learn new functions all the time simply based on the keyboard combinations you find yourself using the most. The more you need to “Save As,” the more times you’ll hit Alt, f, a. Before long, you’ll be using this keyboard combination without thinking about it and “Save As” will be as fluid a motion as typing the word “cat.”

Again, the key is awareness. The next time you need to perform a function that would require you to take the mouse to the menu bar, try Alt.

Using the Tab key

Tab is not just for indenting. Pressing the Tab key will move the focus within a window forward to whatever the next input area is. On a website, Tab will cycle you through hyperlinks. You can witness this by just holding “tab” down on this window. Combining “tab” with “shift” will do this same process in the reverse direction. Check it out.

The Alt+Tab hot key

The bane of managers everywhere and the salvation of employees is the Alt+Tab key combination. Alt+Tab brings up a window that allows you to toggle between the windows open on your desktop.

Try it out. Adding “shift” as in Alt+Shift+Tab will take you through open windows in reverse.

What’s the point? For one, imagine if you are at work but not working — like on reddit.com or perezhilton.com. Your boss comes in. Without Alt+Tab, this could send your right hand scrambling for your mouse and moving for the “X” or minimize button. Comparatively, with Alt+Tab and your left hand still resting on the keyboard, you can quickly toggle back to the “work” screen of choice, like PowerPoint or Excel. Your employer will almost certainly be none the wiser!

Outside of goofing off at work, Alt+Tab is very useful when you have multiple windows open. It can also be used in concert with other hot keys to dramatically amplify your productivity. Imagine copying from one program, Alt+Tabbing to another, and pasting, all without leaving the keyboard.
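Here is one hypothetical walkthrough (assuming Windows defaults): highlight a figure in Excel and press Ctrl+c; press Alt+Tab to jump to the email you’re drafting; press Ctrl+v to paste; press Alt+Tab again to land right back in Excel. Four keyboard combinations, zero trips to the mouse.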

That is the power of hot keys and keyboard combinations.

Homework

Clearly replacing mouse-reliance with hot keys and keyboard combinations takes some habit-changing effort on your part. However, you will be amazed at how adding single hot keys and keyboard combinations in a piecemeal fashion can save you truly insane amounts of time.

Going forward

There are numerous ways to use hot keys. In my day-to-day grind, I use hot keys and keyboard combinations most in Gmail, Excel, and more generally in other applications. There are a few more universal hot keys that you should know about (like Alt+Tab). Stay tuned for more!

* An exception is graphic design, though hot keys are still very important there as well.

** I know that many people are increasingly using Macs, which have an Apple key. Many of the Windows-based hot keys translate over to Macs (as they do to Linux, e.g., Ubuntu). Perhaps at some point I’ll have a Mac and learn the specific intricacies of that OS, but for now, this grind skill will be primarily focused on Windows and Windows-based software.

Grind Skills Reading