Digitally Isolated.

A poignant depiction of the iPhone as a prison via Felipe Luchi for Go Outside Brazil.

I keep thinking about being digitally isolated.  What is “digital isolation?” In a nutshell: today we are more connected to anyone/everyone than at any point in history yet (paradoxically) we feel ever more alone. Stranger still, it seems we have chosen this as our preferred mode of existence.  There’s even a joke about it: there are nine ways to reach me on my phone without talking to me; pick one of those.


It’s incredible that we can find people like us all over the world with whom we can connect in a meaningful way about a certain idea, topic, or shared interest. The Internet has made that kind of deep, direct communication a reality and it’s helping people find others who are like them.  With such fantastic connectivity, you’d think people would feel less alone.

On any given random, unusual, defining personality trait or unique(-ish) personal interest, this may be true. However, some data indicates that when it comes to topics of great importance (e.g. how we feel about ourselves, about others, about family, about sex, etc. versus what we think about politics, morals, religion, or our favorite TV show/sports team), it seems we’ve got fewer people to talk to than ever.

Take, for instance, a study published in the American Sociological Review, which found that from 1985 to 2004, the share of the population reporting that they discuss important matters with either no one or only one other person grew to 43.6%. From the study: “The modal number of discussion partners has gone from three to zero.”  And that was as of 2004!

That’s not good. You see, while the Internet functions fantastically at satisfying our highest-level needs — think: our convictions about politics, morals, religion, or just our favorite music, sports team, or TV show — it’s not so great when our human needs become more basic (see: Maslow’s Hierarchy of Needs). Here I’m talking about our sense of place in a community, our effectiveness at connecting to friends and family, and our more basic sense of satisfaction with day-to-day life.

Where is the breakdown?  From what I can tell, it stems from our ability to be too surgical in our interactions with other human beings.  Imagine needing to tell someone you care about (a friend, a boss, a spouse) something that may upset them.  You can tell them over the phone, in person, or over email.  Which method gives you the most control?  I think for most it’d be the email:  we can be very precise in crafting the message, we can shield our emotions, and we can save ourselves from seeing the reaction (one we anticipate will be negative) of the recipient.

Or think about how we interact on social networks like Facebook.  There, we are able to blast out what we’re doing through text, photos, or even video.  While the common complaint about Facebook posts is that the stuff shared is overly mundane (e.g. photos of what we’re eating), the truth of these communications is that they are all overly filtered.

One of the most popular services of recent note (now part of Facebook) is Instagram.  Instagram empowers us to apply filters to our photos to make them look better, which, given how most camera phone photos look, is a much needed service.  Instagram’ed photos are hugely popular, so much so that they are cliche.

Instagram is just a more specific version of what Facebook is generally.  On Facebook (or any digital platform), we get to be the editors of who we are to the world.  The result?  What we share about ourselves amounts to being an “Instagram’ed” version of who we are.  It’s polished.  Selective.  Distilled.  And perhaps a little cliche.

It’s not that we never “dress to impress” or filter ourselves to others in “meat space.”  Clearly we do.  It’s just that doing so is so much harder when we’re sharing physical space with someone else.  When we spend time physically with other people, we don’t even have to say anything at all.  An agenda isn’t required.

Being there.

Today, it’s easy to think of the quiet time spent with another person as time we could be spending reading email, checking favorite websites, playing games on our digital devices, or engaging in some other activity (something to fill the void).  We do think of these micro-instances this way.  More and more, we never need to “waste” time without engaging something or someone somewhere.

When we trade this (ostensibly unused) time with others for distraction, what do we lose?

Whenever something bad happens to a friend or loved one, I often struggle to find something meaningful to say to express my condolences and concern.  I often just say, “I’m here for you if you need anything.”  I want to “be there” for my friends or family.  But what does “being there” mean?

Rather than being some vacuous statement, I think “being there” for someone else is a hugely important part of having a meaningful relationship.  “Being there” is being available, physically and mentally, to embrace the people I care about, but it usually only happens if the other party sees me as being available.

Being there is giving them my time and attention and letting them waste it in silence if they choose.  It’s being vulnerable to them, handing over the power to spend my time however they choose (and it goes both ways). If they need to burn some time around me in silence, that’s fine.  If they need to get something out, then being present gives them the chance to scrounge up the courage to have it out.

To date, I don’t know of any way to “be there” for someone else digitally.  Being reachable by 10 ways via 5 computers/tablets/mobile devices is not the same as being available.

(Come to think of it, the nearest technology that gets at availability may be instant messaging, where you appear “available” to chat … but that’s still not the same as physically being available to others.)

Being alone.

So there are two things at play.  First, thanks to ever-more ways to digitally relate to other people (or distract ourselves), we increasingly choose filtered communications over unfiltered ones.  Second, we spend less and less time being present to those we are physically near.

The result?  We feel more alone.  Our digital world isolates us.

And when the going gets tough, we don’t know where to turn or who to talk to, because there’s no Facebook status update for needing a shoulder to cry on right now.  Nor are we comfortable unloading our personal fears and anxieties, or even our simple joys, on our digital friends: the ones we’ve never met in person who happen to share a fancy for medieval cooking or hilarious Internet memes on reddit.

This is how it is today.  In a world where we are ever more connected to others, we lack the sort of connection that matters most: simply being present to the relationships that matter, even when those interactions are nothing more than wasted time in another’s presence.

The time spent actually being present to the people we care about is never wasted: it’s the opportunity to be real.  To be there.  How do we get back to that?  That’s the kind of connectivity that lasts.

Raw Milk Safer than Salad

A little background

Some of you know that when our second daughter Raya was around 5 or 6 months old, I started “homebrewing” her formula based on a recipe for a baby formula made from raw cow’s milk that I found at The Weston A. Price Foundation website.  I made this formula for Raya for about six months before we just started giving her straight raw cow’s milk.  Today, and ever since (some seven months later), both our girls continue drinking raw cow’s milk.  I’ll circle back and talk more about that in a minute.

The data shows that raw milk is low risk

The WSJ recently published an article titled New Studies Confirm: Raw Milk A Low-Risk Food.  The studies alluded to were from a presentation given to the British Columbia Centre for Disease Control back in mid-May by Nadine Ijaz.  Here’s a clip from the article (emphasis mine):

The reviewer, Nadine Ijaz, MSc, demonstrated how inappropriate evidence has long been mistakenly used to affirm the “myth” that raw milk is a high-risk food, as it was in the 1930s. Today, green leafy vegetables are the most frequent cause of food-borne illness in the United States. British Columbia CDC’s Medical Director of Environmental Health Services, Dr. Tom Kosatsky, who is also Scientific Director of Canada’s National Collaborating Centre for Environmental Health, welcomed Ms. Ijaz’s invited presentation as “up-to-date” and “a very good example of knowledge synthesis and risk communication.”

Quantitative microbial risk assessment is considered the gold-standard in food safety evidence, a standard recommended by the United Nations body Codex Alimentarius, and affirmed as an important evidencing tool by both the U.S. Food and Drug Administration and Health Canada. The scientific papers cited at the BC Centre for Disease Control presentation demonstrated a low risk of illness from unpasteurized milk consumption for each of the pathogens Campylobacter, Shiga-toxin inducing E. coli, Listeria monocytogenes and Staphylococcus aureus. This low risk profile applied to healthy adults as well as members of immunologically-susceptible groups: pregnant women, children and the elderly.

“While it is clear that there remains some appreciable risk of food-borne illness from raw milk consumption, public health bodies should now update their policies and informational materials to reflect the most high-quality evidence, which characterizes this risk as low,” said Ijaz. “Raw milk producers should continue to use rigorous management practices to minimize any possible remaining risk.”

How about that?

Ijaz’s presentation on the myths of raw milk

I did some Googling and was able to find Nadine Ijaz’s blog The Bovine, and from there, a link to her presentation as she presented it, with all the slides, in full.  You can find it here (Run time looks to be about an hour).  Notably, while Ijaz is biased towards regulatory reform in the milk industry, her research was “independent and unfunded.”

I’ve not had the time to watch the full presentation, but one site has already summarized it here.  The presentation is organized around exposing the major myths around raw milk and isn’t limited to the prevailing myth most people believe—that raw milk could make you sick because it’s not pasteurized.  She also tackles some of the more positive (but misguided) notions around raw milk.  Here are the six myths she speaks to (here’s the screenshot from her presentation):

  • Myth #1: Raw milk is more digestible for people with lactose intolerance
  • Myth #2: Enzymes and beneficial bacteria in raw milk make it more digestible for humans
  • Myth #3: Raw milk is shown to prevent cancer, osteoporosis, arthritis, diabetes
  • Myth #4: Raw milk is a high-risk food
  • Myth #5: Raw milk has no unique health benefits
  • Myth #6: Industrial milk processing is harmless to health

I had read some of these (apparent) myths as selling points for drinking raw milk back when I first learned about it.  I’m certainly guilty of repeating claims about its digestibility to friends and family when discussing feeding an infant raw milk.  Ijaz’s presentation debunks myths 1-3 as unsubstantiated, but I’d say that 1-3 are really minor points (disease/illness prevention certainly is intriguing, but I’d never thought of raw milk as some panacea).

Moving past these first three myths, you get to the meat of Ijaz’s presentation—that raw milk is low risk.

Raw milk is less risky than salad (Should we ban the sale of leafy greens?)

I love these two slides (around slide 121):


Just this year, a U.S. CDC study found that green leafy vegetables (a.k.a. salad greens like lettuce, spinach, and kale, among other things) are the most frequent cause of foodborne illness in the United States, causing 20% of all cases from 1998 to 2008.

Note that back in 1938, 25% of U.S. foodborne outbreaks were attributed to raw milk; however, today, 1-6% of foodborne outbreaks across industrialized nations are attributed to all dairy products (pasteurized or not) (per slide 103).

In short, isn’t the takeaway here that you’re arguably at greater risk of getting sick from eating raw vegetables than from drinking raw milk?

The benefits outweigh the (low) risk

According to some 8 cross-sectional and 2 cohort studies from 2001-2010, there is evidence that raw milk consumption may reduce asthma and allergy in young children.  Most recently, the 2011 GABRIELA study, which took data on some 8,000 school-aged children, found (per slide 157) an independent protective effect of raw farm milk on the development of asthma, allergy, and hay fever.  Just how much protection?  A reduction by approximately half.

Given how pervasive allergies seem to be these days among children, this seems like a pretty huge reason to give your kids raw milk.

And more

Ijaz had even more to share in her presentation and if you don’t have time to give it a listen (I didn’t), scan Wellness Tips’ summary. Here’s a quote I’ll leave you with:

It is scientifically reasonable for people, including pregnant women and parents of young children to choose hygienically produced raw milk over industrially processed milk, whether or not they heat it themselves afterwards. It is not scientifically justifiable to prohibit people, including pregnant women or parents of young children from choosing to seek out an important food which may effectively prevent allergy and asthma.

Nadine Ijaz, MSc.

My own experience with raw milk and raw milk formula

I personally don’t drink milk, raw or otherwise. I do consume dairy in the form of yogurt and cottage cheese, but that’s another story.

However, my daughters are definitely on the raw milk train and have been for a year now (my youngest) and about half a year (my oldest). While I don’t know if they’ve benefited from allergy or asthma avoidance, both are in daycare and both are exposed, as a result, to a lot of human-borne pathogens.

I wish I could say they’ve never gotten sick during that time, but it’s just not so.  I will say that Raya’s teachers, while curious about (but accepting of) her drinking my homemade raw cow’s milk formula, remarked on how infrequently she was sick relative to the other kids in her class.  Raya was born about a month before her due date, spent a few days in the NICU while her lungs shed fluid after she was born, and was immediately put on antibiotics.

Part of me felt like these circumstances could be developmental setbacks for my kid.  I think that’s the main reason I made the commitment to give her a better formula once she went off breast milk.  But it wasn’t the only reason.  As someone who has spent an inordinate amount of time learning about nutrition, I just couldn’t find a good option for baby formula.  I also wanted my wife to feel good about the transition to formula, so I wanted Raya’s primary food source to be really healthy.

The thing is: mass-produced baby formulas just don’t seem all that healthy.  Take a look at the baby formula available on the shelf at your local grocer and you’ll find there’s a lot that’s disconcerting.  There are weird ingredients that, while they may not be bad per se, I don’t understand.  Even some of the organic, milk-based formulas still contain soy oil.  There are questionable sugars (and God forbid you use a soy-based formula).  Hardly any have probiotics in them.  And anything that is shelf-stable for months, well, it may not kill your kid, but it’s very likely not ideal.

Given all this, it’s hard not to step back and reflect, “Maybe my infant child deserves healthier food—particularly since this is the only  food they’re going to be eating.”  And more to the point: the convenience of buying store-bought, ready-to-mix formula isn’t a good reason to shortchange my daughter’s health if I can help it.

So despite it meaning I had to make her formula about 3X a week (each batch taking probably 20 minutes), plus weekly trips to a farmers market to get “Dairy Pet Aid” from a Tennessee farm that made deliveries to Georgia, I opted to make Raya’s formula.   And I’ve never seen any drawbacks from that decision.

And as we go forward, we continue to get raw milk that’s apparently only suitable for pets to drink.  Isn’t it strange that it’s legal to buy healthier food for our pets than our kids?

Moving forward

Hopefully, Ijaz’s work with the CDC in Canada will trickle down south to the United States.  If nothing else, increased awareness through publications like the WSJ should help moo-ve the needle in the right direction.

And if you’re not already plugged into a farm that can provide you with raw milk (or cheese!) or grassfed beef or pastured poultry, what are you waiting for? The only way our food supply is going to get better is if you make an effort — and it only takes a small effort — to buy better food. In our house, while we’ve taken many steps towards more organic, more local, healthier food, we still shop at Publix, Kroger, and Costco.  A step in the right direction isn’t an all or nothing proposition.

Do what you can and build on it over time (if you can!)—that’s what I do.

Why I Didn’t (And Don’t) Vote

Folks who know me know I don’t vote. Many roll their eyes at this decision. Others awkwardly skirt around it preferring to avoid asking why I don’t vote (I don’t usually go into why unless prompted). And most people just assume I do vote. Of all my friends and family and coworkers, I’m unsure how many have given a second thought to the act.

Do they simply accept the rhetoric that it’s some moral imperative to vote? That it’s a duty? That it’s a right you must exercise to preserve?

I don’t know.

And here I’m talking about incredibly smart people, which is to say (call me an elitist if you must) that the population at large probably has never given a second thought to the dogma supporting the act of voting. And I’ll just sidebar and say that a huge swath of U.S. citizens actively choose to skip the vote. Why? Are they weak in character? Is it simply not worth their time? I’m sure the reasons are many — my reasons are.

Back to my much smaller network, the reality stands: it’s so taboo to talk about voting that it usually never gets talked about.

But I just read a great write-up that succinctly captures the main reasons I don’t vote. It’s over at Strike the Root and written by Carl Watner (and was pointed out to me by Patri Friedman via G+) and starts with a quote from Henry David Thoreau, followed by Watner’s main four points on why he chooses not to vote:

How does it become a man to behave toward this American government to-day? I answer that he cannot without disgrace be associated with it . . . . What I have to do is to see, at any rate, that I do not lend myself to the wrong which I condemn.

— On The Duty of Civil Disobedience (1849), Henry David Thoreau

[Watner on non-voting]

  1. Truth does not depend upon a majority vote. Two plus two equals four regardless of how many people vote that it equals five.
  2. Individuals have rights which do not depend on the outcome of elections. Majorities of voters cannot vote away the rights of a single individual or groups of individuals.
  3. Voting is implicitly a coercive act because it lends support to a compulsory government.
  4. Voting reinforces the legitimacy of the state because the participation of the voters makes it appear that they approve of their government.

My three-year-old daughter is already being taught in her daycare about the importance of voting. She admonished me (seriously) for not voting last night. I’m not overly bothered by this, and I calmly told her I don’t vote because I don’t like controlling the lives of other people. I asked her if she liked being told what to do, but I’m afraid it’s a bit too early for that rebuttal.

When the day comes that I can explain to a more receptive (patient!) ear, I think Watner’s list here may resurface. So it is that I’m keeping it here for future reference.

I don’t vote. I didn’t vote. Why? There are so many reasons and these are but a few.

How many reasons do you really need?

What’s Lost in Outsourcing your Life?

David D. Friedman had a thought-provoking post over the weekend — Middlemen, Specialization and Birthday Parties. Therein he talks about how specialization and the division of labor have allowed us to cheaply outsource various aspects of our lives that were formerly almost necessarily DIY. Below is an example I can relate to now that I’m living, as a parent, in my own era of kids’ birthday parties — and note Friedman’s reaction (second paragraph):

This afternoon I attended my grandson’s birthday party. It was held at a facility obviously designed for holding children’s parties. The entertainment, preceded by a safety video, consisted of playing on and in large inflatable structures—slides, a bouncy room, an obstacle course. That was followed by cake and pizza, after which everyone went home, the birthday boy accompanied by a bag of unopened presents.

Looking at it as an economist, it is clear that the change from then to now represents an increased use of the division of labor, something that, as an economist, it is hard for me to object to. And yet I do, and I do not think the reason is entirely a conservative preference for the way things used to be. For somewhat similar reasons, I find having guests over for dinner a different, and better, practice than taking them out to a restaurant. Homes have an emotional dimension to them. To invite someone into your home, whether an adult colleague or a child’s friend, is to some small degree to treat him as part of your family.

I tend to agree. I also can’t help but think there’s a tie-in here to buying a friend a gift card rather than a gift. Sure, giving a friend cash or a gift card is like saying, “Hey, I know that no one knows what you want better than you do, so you make the choice and get what you want!” The reality, for me anyway, is that gift cards turn the gift, on some level, into work. Now I have to remember to allocate the cash to buying something. I have to remember I have the gift card. While a gift picked out by hand is more of a risk for the gifter, it also is more personal. I learn something about the giver in the process. I also recognize that the giver sacrificed their time to pick the gift out. These little things somehow add a lot of depth to the experience — at least they do for me.

I like Friedman’s mention of having friends over to your house in lieu of going out to eat. While we haven’t intentionally engaged in a preference of eating-in with friends over eating-out with them, we probably do it about 50% of the time. Going to have to try to bump that up, if possible. Recently, we’ve done a lot more takeout for these meals with friends, which helps some with the work, but still keeps the gathering more personal and relaxed. I’d never really thought about how you lose some of the emotional dimension by eating out at restaurants instead of in your house. It’s a great point.

Overall, I feel like friendships and family-friendships are incredibly hard to build these days. Everything seems over-formalized, requiring lots of advance planning to get together with other couples and couples with kids. “Playdates” are a normal thing now, which just strikes me as a little bizarre.

Anyway, I’m digressing from Friedman, so I’ll stop. Anyone else feel like he does? Like I do?

Is “Follow your passion” Good Advice? Probably not.

The problem isn’t that we’re all aimlessly trying to find our passion, it’s the misguided expectations that:

  1. we know what we want (or think will make us happy)
  2. that there’s an easy, quick way to get it (or that the road to acquiring it will be nothing but fun)

Or to put it another way: how often have you known in advance, before trying a type of food you’ve never had, whether or not you’d like it? You can’t know what you don’t know. How can you know what work you will enjoy doing before you’re doing it? How do you stumble onto that work? How do you get enough expertise to have the opportunity to do that sorta work?

Check this article: Solving Gen Y’s Passion Problem

I also can’t help but think this also applies to commitments to friends and significant others. Working hard at making a relationship function over the long haul isn’t easy and it takes commitment, a willingness to learn and grow as a person, and a number of other things. But doing it can lead to … more passion. But you gotta have realistic expectations and goals.

Let’s stop romanticizing reality. Life is not romantic.


It’s this final implication that causes damage. When I studied people who love what they do for a living, I found that in most cases their passion developed slowly, often over unexpected and complicated paths. It’s rare, for example, to find someone who loves their career before they’ve become very good at it — expertise generates many different engaging traits, such as respect, impact, autonomy — and the process of becoming good can be frustrating and take years.

The early stages of a fantastic career might not feel fantastic at all, a reality that clashes with the fantasy world implied by the advice to “follow your passion” — an alternate universe where there’s a perfect job waiting for you, one that you’ll love right away once you discover it. It shouldn’t be surprising that members of Generation Y demand a lot from their working life right away and are frequently disappointed about what they experience instead.

The good news is that this explanation yields a clear solution: we need a more nuanced conversation surrounding the quest for a compelling career. We currently lack, for example, a good phrase for describing those tough first years on a job where you grind away at building up skills while being shoveled less-than-inspiring entry-level work. This tough skill-building phase can provide the foundation for a wonderful career, but in this common scenario the “follow your passion” dogma would tell you that this work is not immediately enjoyable and therefore is not your passion. We need a deeper way to discuss the value of this early period in a long working life.

Weight Gain from Forced Overeating has Limits

It seems the only things I can find time to blog on these days are posts from Peter of Hyperlipid. I’ve whittled down the number of nutrition blogs I follow — just not enough time in a day — but Peter’s is fun to read, if a bit “in the weeds.” Peter can go in depth on scientific studies and the chemistry of metabolism, mitochondria, insulin, etc., but he almost always has a way of distilling that information so that I can not only make sense of it, but take away some insight. If you are interested in what drives obesity, eating, etc., Peter’s blog is one of the best around — particularly if you’re not buying the whole “Reward Hypothesis” of obesity being trumpeted by Stephan Guyenet, or are at least skeptical of it. (I think Guyenet is off track here; Todd Becker’s theories make a more holistic, coherent case that strings together the behavioral aspects of obesity, like reward, with the insulin side — required reading of his here and here and here.)

That’s a big digression from why I’m blogging at all, which is to highlight Peter’s latest post, which looks at a study that overfed a few lean individuals by 2,000 calories/day and measured their fat gain. What happened? They gained weight, but not as much as you’d predict from a pure calories-in/calories-out energy balance theory of obesity. Given they were overeating highly rewarding foods like Snickers bars and chocolate milk, this would seem to fly a bit in the face of the Reward theory of obesity. What’s going on? Well, you have to read Peter, but here’s a taste:

Ah, but if insulin stores fat, why should the level of insulin fall progressively during a sustained hypercaloric eating episode? Surely you must need insulin to store those extra calories? In fact, as insulin levels fall, so does the rate of fat storage. The chaps gained, from Table 3, 1kg of fat mass in the first week and only 0.5kg of fat in the second week… Oh, I guess this must be because the subjects either (a) sneaked off to the gym in the second week or (b) flushed their Snicker Bars down the loo in the second week, without passing them through their gastro intestinal tract first (good idea!) or (c) got bored with Snickers and stopped finding them rewarding. And of course they disconnected their Actiheart monitors at the gym.

Otherwise how you can eat 2000kcal over your energy expenditure, equivalent to nearly 200g of fat gain per day, and gain a kilo of fat in the first week, then continue to eat an excess 2000kcal/d for a second week and only gain half a kilo of fat? Calories in, calories out, you know the rules. Hmmm, in the second week there are 14,000 excess calories-in, 5,000 stored, very interesting. …

[So what is going on here? …]

The mitochondria say they have too many calories. It’s easy for mitochondria to refuse calories from glucose by using insulin resistance, working at the whole cell level. In the presence of massive oral doses of glucose this must elevate insulin to maintain normoglycaemia. The elevated insulin diverts calories from dietary fat in to adipocytes, away from muscle cells. And inhibits lipolysis at the same time … So insulin goes up to maintain normal blood sugar levels, overcomes insulin resistance to run cells on a reasonable amount of glucose and shuts down FFA release to counterbalance its action in facilitating the entry of glucose in to cells.

Core to this is (a) there is no hyperglycaemia, insulin still successfully controls glucose flux and (b) insulin inhibits lipolysis. So you store fat. These subjects are both young and healthy. They do not have insulin resistant adipocytes, mitochondrial damage or a fatty liver. The system works as it should.

As time goes by fasting insulin levels fall and weight gain slows. Calorie intake doesn’t drop. The only plausible explanation is that the subjects generate more heat and radiate that heat during the second week of the study.

Important to the above study was that it was conducted on healthy, non-overweight individuals, so their metabolic systems working “as they should” would make sense — basically, their bodies ultimately start resisting weight gain despite overeating. They heat up and radiate off extra calories.
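Peter’s arithmetic is easy to sanity-check with a back-of-the-envelope sketch. The figures below are my own illustrative assumptions: a ~2,000 kcal/day surplus (from the study) and roughly 9,000 kcal per kilogram of stored fat, which is about what Peter’s “nearly 200g of fat gain per day” implies:

```python
# Back-of-the-envelope energy accounting for the overfeeding study above.
# Assumed figures (mine, for illustration): ~2,000 kcal/day surplus,
# ~9,000 kcal per kg of stored body fat.
SURPLUS_KCAL_PER_DAY = 2000
KCAL_PER_KG_FAT = 9000

weekly_surplus = SURPLUS_KCAL_PER_DAY * 7  # 14,000 kcal of excess per week

# Fat actually gained (kg), per Table 3 as Peter reports it
observed_gain = {"week 1": 1.0, "week 2": 0.5}

for week, kg in observed_gain.items():
    stored = kg * KCAL_PER_KG_FAT          # kcal that ended up as body fat
    unaccounted = weekly_surplus - stored  # kcal burned/radiated off instead
    print(f"{week}: ~{stored:,.0f} kcal stored as fat, "
          f"~{unaccounted:,.0f} kcal unaccounted for")
```

The point the numbers make: by the second week, most of the 14,000 excess kcal isn’t showing up as fat at all, consistent with the subjects simply generating and radiating more heat.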

This makes a lot of sense to me based on my personal experiences overeating — anecdotally, it seems to hold water (no glycogen-storage-equals-water-retention pun intended, I swear).  Earlier this year I bumped up my caloric intake by at least 4K calories/week (maybe more).  Weight gain was incredibly, surprisingly slow after the initial water-weight gain as I went from a glycogen-depleted (less water retention) weight to a glycogen-replenished (more water retention) weight.  It was surprising to me that I didn’t gain more weight overeating, but what I noticed in the process was that I felt like I had more energy, felt warmer, was more likely to be active, etc. — this as compared to alternate-day, up-day/down-day eating (with a net deficit or close to it).  I literally felt my body burning off at least some of the caloric excess.

Meanwhile, I paid particular attention to the fact that dietary fat eaten in excess of my needs would tend to be stored (and I assumed that excess carbohydrates aren’t easily turned into fat), so I tended to overeat on carbs and not fat.  I think that kept fat gain in check, too.

I’ll wrap with a thought about bodybuilders and a popular diet they use to gain weight: GOMAD, which stands for “gallon of milk a day.”  As near as I understand it, you basically eat your regular diet and then drink a gallon of milk on top of it (I believe whole milk is prescribed).  The milk provides something like 2,400 excess calories per day.  That’s nearly 17K a week; if all of that turned into fat, it’d be a bit less than 5 lbs./week.  I don’t know if this is true or not, but I doubt bodybuilders doing GOMAD actually gain 5 lbs./week.  But why even bother with such a huge caloric excess in the first place?  Probably because bodybuilders have found — via trial and error and a lot of self-experimentation — that in order to get mass gains (muscle, not just fat), they have to massively overeat to force the body to grow.
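For what it’s worth, the GOMAD arithmetic is easy to check. The figures here are common rule-of-thumb assumptions of mine, not anything from a GOMAD protocol: ~2,400 kcal per gallon of whole milk and ~3,500 kcal per pound of body fat:

```python
# GOMAD back-of-the-envelope: what if every excess milk calorie became fat?
# Assumed rule-of-thumb figures (mine): ~2,400 kcal per gallon of whole milk,
# ~3,500 kcal per pound of body fat.
GALLON_WHOLE_MILK_KCAL = 2400
KCAL_PER_LB_FAT = 3500

weekly_excess_kcal = GALLON_WHOLE_MILK_KCAL * 7  # nearly 17K kcal/week
worst_case_gain_lb = weekly_excess_kcal / KCAL_PER_LB_FAT

print(f"Weekly excess: ~{weekly_excess_kcal:,} kcal")
print(f"Upper bound on fat gain: ~{worst_case_gain_lb:.1f} lb/week")
```

That upper bound (a bit under 5 lbs./week) is almost certainly not what GOMAD bodybuilders actually gain, which is the homeostasis point above.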

Seems our bodies are homeostatic systems and like to keep the status quo. Go figure. Ok, that was probably fascinating to 1% of you and likely .0000001% of the Internet, but I found it fascinating.

Thanks, Peter!

Compensating for Broken Fat Cells

When it comes to reading about the metabolic effects of eating a high-fat diet (with low carbohydrate, in turn), I turn to Peter’s wonderful Hyperlipid. I was catching up on Reader the other day when I saw this post about broken mice. It’s a bit esoteric, so be warned, but there’s an idea therein that I find particularly interesting — it pertains to mice with broken metabolisms.

The result is the following:

They develop neuronally mediated acute insulin hypersensitivity in their adipocytes, they then abnormally store fat at low levels of insulin, increase eating to compensate for this calorie loss in to adipocytes and eventually develop adipocyte distention induced insulin resistance, which shows as metabolic syndrome.

If I might try to distill the above, what I take from it is that these mice have a totally screwed up insulin response in their fat cells, causing them to snag up whatever dietary fat is present (taking it out of the pool of energy available to the body). If you’re eating a decent amount of carbs or protein (both increase circulating insulin), the net effect will be overeating as your fat cells soak up any accompanying dietary fat (and perhaps a bit of converted fat), effectively “starving” your other cells (you get hungry as a result). This works until it breaks (the fat cells get too big and can’t take on more nutrients). Once broken, the body can’t deal with excess energy and starts failing (metabolic syndrome).

This all reminds me of Gary Taubes’ analogy to the plugged bathtub filling up with water until the water pressure gets high enough in the tub to push through the clogged drain; and when/if this stops working, the water goes over the sides of the tub (metabolic syndrome). The tub is your fat. Something like that.

Gamers Solve Problem that Dogged Researchers for Decades

“Online gamers have achieved a feat beyond the realm of ‘Second Life’ or ‘Dungeons and Dragons’: they have deciphered the structure of an enzyme of an AIDS-like virus that had thwarted scientists for a decade.”

Awesome. The takeaway I see in this approach is evocative of self-experimentation. Gamers are basically lots of little experimenters. Perhaps gamers succeeding where scientists had failed for a decade is less about playing a (purposeful) game for fun and more about simply having enough people iterating on the problem with a basic incentive to succeed (it’s fun).

Read more:

Online gamers crack AIDS enzyme puzzle


Figuring out the structure of proteins is vital for understanding the causes of many diseases and developing drugs to block them.
But a microscope gives only a flat image of what to the outsider looks like a plate of one-dimensional scrunched-up spaghetti. Pharmacologists, though, need a 3D picture that “unfolds” the molecule and rotates it in order to reveal potential targets for drugs.
This is where Foldit comes in.
Developed in 2008 by the University of Washington, it is a fun-for-purpose video game in which gamers, divided into competing groups, compete to unfold chains of amino acids – the building blocks of proteins – using a set of online tools.
To the astonishment of the scientists, the gamers produced an accurate model of the enzyme in just three weeks.
Cracking the enzyme “provides new insights for the design of antiretroviral drugs”, says the study, referring to the medication to keep people with the human immunodeficiency virus (HIV) alive.
It is believed to be the first time that gamers have resolved a long-standing scientific problem.
“We wanted to see if human intuition could succeed where automated methods had failed,” Firas Khatib of the university’s biochemistry lab said in a press release.
“The ingenuity of game players is a formidable force that, if properly directed, can be used to solve a wide range of scientific problems.”
One of Foldit’s creators, Seth Cooper, explained why gamers had succeeded where computers had failed.
“People have spatial reasoning skills, something computers are not yet good at,” he said.
“Games provide a framework for bringing together the strengths of computers and humans. The results in this week’s paper show that gaming, science and computation can be combined to make advances that were not possible before.”

Failure to Move is the State of Paralysis

I keep returning to the idea of action (doing) over inaction (thinking). I’ve also been likening doing vs. thinking to producing vs. consuming. The problem with the consumption/production dichotomy is that the lines aren’t always clear as to which is which. Sometimes you have to consume to produce.

Things I consume:

  • food/energy/time (necessary consumption)
  • blogs/books/tweets/email (some necessary, some unnecessary)
  • television (almost entirely unnecessary)

Things I produce:

  • blog posts/emails/ideas (derivative of consumption)
  • work/research/analysis (requires consumption)
  • art
  • well-being

What I mean by producing “well-being” is that I create satisfaction through expending effort. It seems that production takes effort. I have to push my body through the mild discomfort of squatting 275 lbs. to earn the satisfaction (strange as it is) of a fatigued body. I have to work through the mental gymnastics of writing out my thoughts to create a blog post. I have to gather data and cajole understanding to create analysis. It takes work.

Production has costs.

But perhaps the greatest cost of production is breaking the inertia of not doing anything at all.  Or worse still, imagining all the things you could (should) be doing but never doing any of them.  Not only does all of this low-grade effort fail to produce anything at all, it also reinforces thinking over doing.  It habitualizes inaction.  It amplifies the inertia.

This is why failure to move is the state of paralysis. It’s a tautology, but it also boils down inaction to its most basic component: not doing.

I’ve been thinking about this lately because I have so many ideas bubbling around in my head, most of which could be “big.” And it’s that notion that these ideas have huge potential that makes me fear screwing them up. Meanwhile, by nature of being “big,” they also have explicit costs: I can very easily envision how much work it will take to make them succeed. And wouldn’t you know it? The more I think about them, the harder it becomes to act on them.

And like all productive efforts, all I have to do to break the state of paralysis is to move.

It is that simple.

How We Get Good at Something

It takes mundane, often boring, always repetitive practice. And often a whole lot of it. We learn by doing and not by thinking.

Watch this short creative take about Ira Glass’s advice on storytelling:

This strikes me as relevant to mastering any skill, and reminds me of George Leonard’s “Mastery” (Todd Becker, who prompted me to read Mastery in the first place, has written a good summary of it — it’s a quick, inspiring/challenging book).

Watching that video reminds me of how I “became an artist.” I did a lot of art/cartooning as a kid, and people would say to me, “You’re talented.” Being an artist was then, and still is today, looked at as some sort of “gift” bestowed from the heavens (and/or my genetics). I’ve never personally believed this, though.

How I became an artist was much simpler: I kept trying to copy the cartoon image of Super Mario over and over and over again, doing it better each time. I remember doing it 20-30 times one night for my classmates in maybe 1st grade. What I didn’t realize at the time was that I was inadvertently practicing how to copy something I saw with my eyes and put it down onto paper. Without any prompting or structured learning from parents or teachers, I trained myself as a five- or six-year-old to draw cartoons.

This is the lunchbox that made me an artist:

A vintage plastic Aladdin Super Mario Bros. lunchbox - this is exactly what I used for lunch in early elementary school.

This is how we learn: practice, perseverance, stumbling, and trial and error.