The Democratization of Content

Prerequisite.

Benedict Evans has two thoughtful articles out about content creation versus consumption (and how mobile versus PC relates to the two) and the end of “Content is King.” If you follow Evans on Twitter (and you must if you are at all interested in macro-tech trends, Amazon, Google, Facebook, Apple, etc.), you’ll find both of these articles put lots of words behind ideas he’s been brooding on for some time.

Mobile for creation.

I took two major takeaways from Evans’ articles. The first is that the argument that PCs are for creation whereas Mobile devices are for consumption is incorrect. Relevant quote (emphasis mine):

So, 100m or so people are doing things on PCs now that can’t be done on tablets or smartphones. Some portion of those tasks will change and become possible on mobile, and some portion of them will remain restricted to PCs for a long time. But there are another 3bn people who were using PCs (but mostly sharing them) but who weren’t doing any of those things with them, and are now doing on mobile almost all of the stuff that they actually did do on PCs, plus a lot more. And, there’s another 2bn or so people whose first computer of any kind is or will be a smartphone. ‘Creation on PC, consumption on mobile’ seems like a singularly bad way to describe this: vastly more is being created on mobile now by vastly more people than was ever created on PCs.

The logic of the above is irrefutable. I think Evans is correct.

Why does it matter? Because Mobile is about both creation and consumption. And on the creation side, Mobile has made it incredibly cheap to create content. A quick spin through Instagram or Snapchat and you’ll be inundated with a massive pile of content that was created on Mobile phones. Most of the use of Twitter or Facebook is on Mobile and while much of that is consumption, a large portion of it is also creation. Sharing ideas is content. And it all happens on Mobile.

Unfettered access to content.

The second article by Evans — Content isn’t King — explains how content, which Evans explicitly refers to as music, books, and TV, has ceased being important for the tech industry. Evans writes, “Content and access to content was a strategic lever for technology. I’m not sure how much this is true anymore.” A few thousand words of thoughtful explanation later, he concludes:

The tech industry has been trying to get onto the TV and into the living room since before the consumer internet – the ‘information superhighway’ of the early 1990s was really about interactive TV, not the web. Yet after a couple of decades of trying, the tech industry now dominates the living room, and is transforming what ‘video’ means, but with the phone, not the TV. The reason Apple TV, Chromecast, FireTV and everything else feel so anti-climactic is that getting onto the TV was a red herring – the device is the phone and the network is the internet.

Put differently, the Internet provides ubiquitous access to content and doesn’t play well with owned models of distribution (e.g. traditional channels like TV networks, cable, music labels, book publishers, etc.).

What does this mean? Expanding on Evans’ analysis.

Both of the macro trends Evans discusses are intimately related. It’s the interplay between these two trends where things get interesting.

Too much content.

Mobile has resulted in massive piles of content being created. Photos, video, tweets, sharing (sharing is derivative creation) — it’s all content created on Mobile devices. The supply of this “amateur” form of content is growing every second.

On the other side of the content spectrum is “professional” content. Professional content is almost always the kind of stuff that you can’t create easily on Mobile. Like amateur content, the supply of professional content is growing as well, but at a somewhat slower rate — likely because it’s growing from a much smaller base of creators. Regardless, professionally created music is cheap and plentiful. As for long-form content, there’s far too much to read, whether books, news, or blog posts. Any number of streaming services exist to supply you with endless video — from YouTube to Netflix, HBO to Hulu. Or use Kodi. Professional content may not be cheap to produce the way amateur content is, but it’s now (relative to the past) cheap to consume — just like amateur content.

I lack a fancy stat, but it seems self-evident that we live in a time — right now — when there has never been more content to consume. There’s too much content, and much of what we have access to — more than we could possibly consume — is very, very good.

The supply of content has affected how much we are willing to pay for it. We can only watch so many shows, swipe so many photos, read so many books, and listen to so much whatever.

Content isn’t king because it’s been democratized by the Internet. “The device is the phone and the network is the Internet.”

Is there a place for PC-created content?

While creation of content on Mobile is growing and the quality is improving, PC-based content creation still matters. Why?

Evans writes at length about how little creation happens only on PCs. And he’s right. However, what I think is missing from his discussion is consideration of the kind of content created on PCs. PCs (still) allow for a certain quality of creation that Mobile devices can’t (yet) match. What’s more, the use of PCs selects for content creators who are more likely to be experts and more likely to create higher-caliber content.

How many books have been written on a Mobile device? What percentage of the most-watched vloggers are exclusively using their Mobile devices to edit their videos? What device did Evans use to write his articles?

So while the democratization of content creation matters, it does not mean all content that is created is of equal quality. The highest-caliber creators will still gravitate toward tools that allow them to do more nuanced things or more complicated tasks — the demands of their craft require more capable tools. That matters — at least for now.

Live Experiences vs. Recordings — What We Pay to Engage Attention

David D. Friedman recently posed the question of whether or not YouTube recordings of his lectures were adequate substitutes for the live experience. DDF wrote:

Quite often, when I give a talk, someone records it, often as video, and webs it. That raises a question relevant to what talks I give: Is watching the video a reasonably close substitute for attending the talk? …

This links to a question that has puzzled me for a long time. One common pattern in schooling is the mass lecture—a professor speaking to an audience in the hundreds with students taking notes.  In the fourteenth century, that made a lot of sense as a low cost way of spreading knowledge, but why did it survive the invention of the printing press?

For me, this question firmly centers on the subject of attention, and how much attention people are willing to give a recorded experience that is given away for free (“free” defined simply here, ignoring the non-trivial opportunity costs of consuming the content).

Many chimed in with thoughts on the matter, but I didn’t find anyone directly tackling the issue from the perspective of engaged attention. These days, when our attention comes at a heavy price (something I will write more on), engagement is everything.

So I left the comment below.

… I think it’s all about attention. More specifically, it’s about how much I am willing to engage in a live experience vs. a recorded experience. There are a few things to unpack here:

  • Upfront costs. Almost all live experiences have non-trivial costs associated with them. You have to get to class. You have to pay tuition. You have to adjust to the environment (that is, listen up, direct focus). When we “pay” for a live experience, we are more likely to feel like we’re wasting our time if we don’t pay attention. Note-taking is arguably yet another way to force ourselves to pay attention. I don’t think I’ve ever taken notes while watching a recorded lecture or talk.
  • Perceived value of a live experience. Live experiences are necessarily rarer than recorded ones that can be watched at will (at low cost). Mind, this isn’t binary: a 3x/week live mass lecture to 500 students will have a lower perceived value than a one-off lecture by a guest speaker to the same 500 students, or a 3x/week lecture given to 20 students. Every scenario you can imagine will signal things about value. Can a lecture that is free (because it’s a recording) really be worth watching? Of course! But the signals used to interpret value are going to be derived from other aspects — views received, expected content, production quality, vouched value by others who’ve consumed it, and so on and so forth.
  • Optionality of the live experience. There must be some non-zero potential value of attending a live experience. This can include the opportunity to ask questions of the presenter, the chance of meeting other like-minded people in attendance, and the value of being able to discuss the lecture with other attendees while the content is fresh in their minds.
  • The value of interpersonal connection. This one is probably a little related to perceived value, but what happens when a lecturer can look you in the eye? How does that engage your attention? Related: video conferencing with one other person is far less enjoyable/valuable (to me) than in-person communication.
  • What else? I’m sure there are other reasons we engage our attention more with a live experience as compared to a recording.

Today, with the sky-rocketing volume of “free” content, I find I’m resorting to many new signals of the value of content and whether or not it’s worth my attention. Recorded experiences are great, but they suffer from harder-to-read signals of quality. To make matters worse, when I consume this content, the medium of consumption makes it trivial to abandon the recording (either by literally closing it or by letting my mind wander off).

All said, I wonder to what extent better VR will mitigate some of these negative (and in my mind, undesirable) effects of recorded content by simply engaging more senses and increasing the price of shifting my attention away (due to having to take “goggles” and headphones off).

Raw Milk Safer than Salad

A little background

Some of you know that when our second daughter Raya was around 5 or 6 months old, I started “homebrewing” her formula from a recipe for raw cow’s milk baby formula I found at The Weston A. Price Foundation website. I made this formula for Raya for about six months before we just started giving her straight raw cow’s milk. Today, and ever since (some seven months later), both our girls continue drinking raw cow’s milk. I’ll circle back and talk more about that in a minute.

The data shows that raw milk is low risk

The WSJ recently published an article titled New Studies Confirm: Raw Milk A Low-Risk Food. The studies alluded to were from a presentation given to British Columbia’s Centre for Disease Control back in mid-May by Nadine Ijaz. Here’s a clip from the article (emphasis mine):

The reviewer, Nadine Ijaz, MSc, demonstrated how inappropriate evidence has long been mistakenly used to affirm the “myth” that raw milk is a high-risk food, as it was in the 1930s. Today, green leafy vegetables are the most frequent cause of food-borne illness in the United States. British Columbia CDC’s Medical Director of Environmental Health Services, Dr. Tom Kosatsky, who is also Scientific Director of Canada’s National Collaborating Centre for Environmental Health, welcomed Ms. Ijaz’s invited presentation as “up-to-date” and “a very good example of knowledge synthesis and risk communication.”

Quantitative microbial risk assessment is considered the gold-standard in food safety evidence, a standard recommended by the United Nations body Codex Alimentarius, and affirmed as an important evidencing tool by both the U.S. Food and Drug Administration and Health Canada. The scientific papers cited at the BC Centre for Disease Control presentation demonstrated a low risk of illness from unpasteurized milk consumption for each of the pathogens Campylobacter, Shiga-toxin inducing E. coli, Listeria monocytogenes and Staphylococcus aureus. This low risk profile applied to healthy adults as well as members of immunologically-susceptible groups: pregnant women, children and the elderly.

“While it is clear that there remains some appreciable risk of food-borne illness from raw milk consumption, public health bodies should now update their policies and informational materials to reflect the most high-quality evidence, which characterizes this risk as low,” said Ijaz. “Raw milk producers should continue to use rigorous management practices to minimize any possible remaining risk.”

How about that?

Ijaz’s presentation on the myths of raw milk

I did some Googling and was able to find Nadine Ijaz’s blog The Bovine, and from there, a link to her presentation as she presented it, with all the slides, in full. You can find it here (run time looks to be about an hour). Notably, while Ijaz is biased towards regulatory reform in the milk industry, her research was “independent and unfunded.”

I’ve not had the time to watch the full presentation, but one site has already summarized it here.  The presentation is organized around exposing the major myths around raw milk and isn’t limited to the prevailing myth most people believe—that raw milk could make you sick because it’s not pasteurized.  She also tackles some of the more positive (but misguided) notions around raw milk.  Here are the six myths she speaks to (here’s the screenshot from her presentation):

  • Myth #1: Raw milk is more digestible for people with lactose intolerance
  • Myth #2: Enzymes and beneficial bacteria in raw milk make it more digestible for humans
  • Myth #3: Raw milk is shown to prevent cancer, osteoporosis, arthritis, diabetes
  • Myth #4: Raw milk is a high-risk food
  • Myth #5: Raw milk has no unique health benefits
  • Myth #6: Industrial milk processing is harmless to health

I had read some of these (apparently) myths as selling points for drinking raw milk back when I first learned about it. I certainly am guilty of repeating claims about its digestibility to friends and family when discussing feeding an infant raw milk. Ijaz’s presentation debunks myths 1-3 as unsubstantiated, but I’d say that 1-3 are really minor points (disease/illness prevention certainly is intriguing, but I’d never thought of raw milk as some panacea).

Moving past these first three myths, you get to the meat of Ijaz’s presentation—that raw milk is low risk.

Raw milk is less risky than salad (Should we ban the sale of leafy greens?)

I love these two slides (around 121):

[Slides: raw milk vs. salad greens]

Just this year a U.S. CDC study has said that green leafy vegetables (a.k.a. salad greens like lettuce, spinach, and kale, among other things) are the most frequent cause of foodborne illness in the United States, causing 20% of all cases from 1998 to 2008.

Note that back in 1938, 25% of U.S. foodborne outbreaks were attributed to raw milk; however, today, 1-6% of foodborne outbreaks across industrialized nations are attributed to all dairy products (pasteurized or not) (per slide 103).

In short, isn’t the takeaway here that you’re arguably at greater risk of getting sick from eating raw vegetables than from drinking raw milk?

The benefits outweigh the (low) risk

One of the things raw milk apparently is good for: according to some 8 cross-sectional and 2 cohort studies from 2001-2010, there is evidence that raw milk consumption may reduce asthma and allergy in young children. Most recently, the 2011 GABRIELA study, which took data on some 8,000 school-aged children, found (per slide 157) an independent protective effect of raw farm milk on the development of asthma, allergy, and hay fever. Just how much protection? Reduction by approximately half.

Given how pervasive allergies seem to be these days among children, this seems like a pretty huge reason to give your kids raw milk.

And more

Ijaz had even more to share in her presentation, and if you don’t have time to give it a listen (I didn’t), scan Wellness Tips’ summary. Here’s a quote I’ll leave you with:

It is scientifically reasonable for people, including pregnant women and parents of young children to choose hygienically produced raw milk over industrially processed milk, whether or not they heat it themselves afterwards. It is not scientifically justifiable to prohibit people, including pregnant women or parents of young children from choosing to seek out an important food which may effectively prevent allergy and asthma.

Nadine Ijaz, MSc.

My own experience with raw milk and raw milk formula

I personally don’t drink milk, be it raw or otherwise. I do consume dairy in the form of yogurt and cottage cheese, but that’s another story.

However, my daughters are definitely on the raw milk train and have been for a year now (for my youngest) and about half a year (for my oldest). While I don’t know if they’ve benefited from allergy or asthma avoidance, both are in daycare and both are exposed (as a result) to a lot of human-borne pathogens.

I wish I could say they’ve never gotten sick during that time, but it’s just not so. I will say that Raya’s teachers, while curious about (but accepting of) her drinking my homemade raw cow’s milk formula, remarked on how infrequently she was sick relative to other kids in her class. Raya was born about a month before her due date, spent a few days in the NICU while her lungs shed fluid, and was immediately put on antibiotics.

Part of me felt like these circumstances could be developmental setbacks for my kid. I think it’s the main reason I made the commitment to give her a better formula once she went off breast milk. But it wasn’t the only reason. As someone who has spent an inordinate amount of time learning about nutrition, I just couldn’t find a good option for baby formula. I also wanted my wife to feel good about the transition to formula, so I wanted Raya’s primary food source to be really healthy.

The thing is: mass-produced baby formulas just don’t seem all that healthy. Take a look at the baby formula available on the shelf at your local grocer and you’ll find there’s a lot that’s disconcerting. There are weird ingredients that, while they may not be bad per se, I don’t understand. Even some of the organic, milk-based formulas still contain soy oil. There are questionable sugars (and God forbid you use a soy-based formula). Hardly any contain probiotics. And anything that is shelf-stable for months, well, it may not kill your kid, but it’s very likely not ideal.

Given all this, it’s hard not to step back and reflect, “Maybe my infant child deserves healthier food—particularly since this is the only food they’re going to be eating.” And more to the point: the convenience of buying store-bought, ready-to-mix formula isn’t a good reason to shortchange my daughter’s health if I can help it.

So even though it meant making her formula about 3X a week, with each batch taking probably 20 minutes and requiring weekly trips to a farmer’s market to get “Dairy Pet Aid” from a Tennessee farm that made deliveries to Georgia, I opted to make Raya’s formula. And I’ve never seen any drawbacks from that decision.

And as we go forward, we continue to get raw milk that’s apparently only suitable for pets to drink.  Isn’t it strange that it’s legal to buy healthier food for our pets than our kids?

Moving forward

Hopefully, Ijaz’s work with the CDC in Canada will trickle down south to the United States.  If nothing else, increased awareness through publications like the WSJ should help moo-ve the needle in the right direction.

And if you’re not already plugged into a farm that can provide you with raw milk (or cheese!) or grassfed beef or pastured poultry, what are you waiting for? The only way our food supply is going to get better is if you make an effort — and it only takes a small effort — to buy better food. In our house, while we’ve taken many steps towards more organic, more local, healthier food, we still shop at Publix, Kroger, and Costco.  A step in the right direction isn’t an all or nothing proposition.

Do what you can and build on it over time (if you can!)—that’s what I do.

Why I Didn’t (And Don’t) Vote

Folks who know me know I don’t vote. Many roll their eyes at this decision. Others awkwardly skirt around it preferring to avoid asking why I don’t vote (I don’t usually go into why unless prompted). And most people just assume I do vote. Of all my friends and family and coworkers, I’m unsure how many have given a second thought to the act.

Do they simply accept the rhetoric that it’s some moral imperative to vote? That it’s a duty? That it’s a right you must exercise to preserve?

I don’t know.

And here I’m talking about incredibly smart people, which is to say (call me an elitist if you must) that the population at large probably has never given a second thought to the dogma supporting the act of voting. And I’ll just sidebar and say that a huge swath of U.S. citizens actively choose to skip the vote. Why? Are they weak in character? Is it simply not worth their time? I’m sure the reasons are many — my reasons are.

Back to my much smaller network, the reality stands: it’s so taboo to talk about voting that it usually never gets talked about.

But I just read a great write-up that succinctly captures the main reasons I don’t vote. It’s over at Strike the Root, written by Carl Watner (and pointed out to me by Patri Friedman via G+). It starts with a quote from Henry David Thoreau, followed by Watner’s four main points on why he chooses not to vote:

How does it become a man to behave toward this American government to-day? I answer that he cannot without disgrace be associated with it . . . . What I have to do is to see, at any rate, that I do not lend myself to the wrong which I condemn.

— On The Duty of Civil Disobedience (1849), Henry David Thoreau

[Watner on non-voting]

Truth does not depend upon a majority vote. Two plus two equals four regardless of how many people vote that it equals five.

Individuals have rights which do not depend on the outcome of elections. Majorities of voters cannot vote away the rights of a single individual or groups of individuals.

Voting is implicitly a coercive act because it lends support to a compulsory government.

Voting reinforces the legitimacy of the state because the participation of the voters makes it appear that they approve of their government.

My three-year-old daughter is already being taught in her daycare about the importance of voting. She admonished me (seriously) for not voting last night. I’m not overly bothered by this, and I calmly told her I don’t vote because I don’t like controlling the lives of other people. I asked her if she liked being told what to do, but I’m afraid it’s a bit too early for that rebuttal.

When the day comes that I can explain to a more receptive (patient!) ear, I think Watner’s list here may resurface. So it is that I’m keeping it here for future reference.

I don’t vote. I didn’t vote. Why? There are so many reasons and these are but a few.

How many reasons do you really need?

Is “Follow your passion” Good Advice? Probably not.

The problem isn’t that we’re all aimlessly trying to find our passion; it’s the misguided expectations that:

  1. we know what we want (or think will make us happy)
  2. there’s an easy, quick way to get it (or that the road to acquiring it will be nothing but fun)

Or to put it another way — how often have you known, before trying a type of food you’ve never had, whether or not you’d like it? You can’t know what you don’t know. How can you know what work you will enjoy doing before you’re doing it? How do you stumble onto that work? How do you get enough expertise to have the opportunity to do that sorta work?

Check this article: Solving Gen Y’s Passion Problem

I can’t help but think this also applies to commitments to friends and significant others. Working hard at making a relationship function over the long haul isn’t easy; it takes commitment, a willingness to learn and grow as a person, and a number of other things. But doing it can lead to … more passion. You just gotta have realistic expectations and goals.

Let’s stop romanticizing reality. Life is not romantic.

Excerpt:

It’s this final implication that causes damage. When I studied people who love what they do for a living, I found that in most cases their passion developed slowly, often over unexpected and complicated paths. It’s rare, for example, to find someone who loves their career before they’ve become very good at it — expertise generates many different engaging traits, such as respect, impact, autonomy — and the process of becoming good can be frustrating and take years.

The early stages of a fantastic career might not feel fantastic at all, a reality that clashes with the fantasy world implied by the advice to “follow your passion” — an alternate universe where there’s a perfect job waiting for you, one that you’ll love right away once you discover it. It shouldn’t be surprising that members of Generation Y demand a lot from their working life right away and are frequently disappointed about what they experience instead.

The good news is that this explanation yields a clear solution: we need a more nuanced conversation surrounding the quest for a compelling career. We currently lack, for example, a good phrase for describing those tough first years on a job where you grind away at building up skills while being shoveled less-than-inspiring entry-level work. This tough skill-building phase can provide the foundation for a wonderful career, but in this common scenario the “follow your passion” dogma would tell you that this work is not immediately enjoyable and therefore is not your passion. We need a deeper way to discuss the value of this early period in a long working life.

Weight Gain from Forced Overeating has Limits

It seems the only things I can find time to blog on these days are posts from Peter of Hyperlipid. I’ve whittled down the number of blogs I follow that cover nutrition — just not enough time in a day — but Peter’s is fun to read, if a bit “in the weeds.” Peter can go in depth on scientific studies and the chemistry of metabolism, mitochondria, insulin, etc., but he almost always manages to distill that information in a way that lets me not only make sense of it but take away some insight. If you are interested in what drives obesity, eating, etc., Peter’s blog is one of the best around. And if you’re not buying the whole “Reward Hypothesis” of obesity being trumpeted by Stephan Guyenet (or are at least skeptical of it — I think Guyenet is off track here), Todd Becker’s theories make a more holistic, coherent case that strings together the behavioral aspects of obesity, like reward, with insulin — required reading of his here and here and here.

That’s a big digression from why I’m blogging at all, which is to highlight Peter’s latest post, which looks at a study that overfed a few lean individuals by 2000 calories/day and measured their fat gain. What happened? They gained weight, but not as much as you’d predict based on a pure calories-in/calories-out energy balance theory of obesity. Given they were overeating on highly rewarding foods like Snickers bars and chocolate milk, this would seem to fly a bit in the face of the Reward theory of obesity. What’s going on? Well, you have to read Peter, but here’s a taste:

Ah, but if insulin stores fat, why should the level of insulin fall progressively during a sustained hypercaloric eating episode? Surely you must need insulin to store those extra calories? In fact, as insulin levels fall, so does the rate of fat storage. The chaps gained, from Table 3, 1kg of fat mass in the first week and only 0.5kg of fat in the second week… Oh, I guess this must be because the subjects either (a) sneaked off to the gym in the second week or (b) flushed their Snicker Bars down the loo in the second week, without passing them through their gastro intestinal tract first (good idea!) or (c) got bored with Snickers and stopped finding them rewarding. And of course they disconnected their Actiheart monitors at the gym.

Otherwise how you can eat 2000kcal over your energy expenditure, equivalent to nearly 200g of fat gain per day, and gain a kilo of fat in the first week, then continue to eat an excess 2000kcal/d for a second week and only gain half a kilo of fat? Calories in, calories out, you know the rules. Hmmm, in the second week there are 14,000 excess calories-in, 5,000 stored, very interesting. …

[So what is going on here? …]

The mitochondria say they have too many calories. It’s easy for mitochondria to refuse calories from glucose by using insulin resistance, working at the whole cell level. In the presence of massive oral doses of glucose this must elevate insulin to maintain normoglycaemia. The elevated insulin diverts calories from dietary fat in to adipocytes, away from muscle cells. And inhibits lipolysis at the same time … So insulin goes up to maintain normal blood sugar levels, overcomes insulin resistance to run cells on a reasonable amount of glucose and shuts down FFA release to counterbalance its action in facilitating the entry of glucose in to cells.

Core to this is (a) there is no hyperglycaemia, insulin still successfully controls glucose flux and (b) insulin inhibits lipolysis. So you store fat. These subjects are both young and healthy. They do not have insulin resistant adipocytes, mitochondrial damage or a fatty liver. The system works as it should.

As time goes by fasting insulin levels fall and weight gain slows. Calorie intake doesn’t drop. The only plausible explanation is that the subjects generate more heat and radiate that heat during the second week of the study.

Important to the above study is that it was conducted on healthy, non-overweight individuals, so it makes sense that their metabolic systems work “as they should” — basically, their bodies ultimately start resisting weight gain despite overeating. They heat up and radiate off extra calories.
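To make the arithmetic behind Peter’s “14,000 excess calories-in, 5,000 stored” concrete, here’s a minimal back-of-the-envelope sketch. It assumes roughly 9 kcal per gram of stored fat, the figure implied by Peter’s own “nearly 200g of fat gain per day”; the input numbers are just the ones quoted above, nothing new.

    # Back-of-the-envelope check of the second week's numbers (a sketch, not from the study itself).
    # Assumption: ~9 kcal per gram of stored fat, the figure implied by Peter's arithmetic.
    KCAL_PER_G_FAT = 9

    excess_kcal_per_day = 2000
    weekly_excess_kcal = excess_kcal_per_day * 7          # 14,000 kcal of surplus in week two

    fat_gained_week2_g = 500                              # ~0.5 kg of fat gained in week two
    stored_kcal = fat_gained_week2_g * KCAL_PER_G_FAT     # ~4,500 kcal ("5,000 stored")

    unaccounted_kcal = weekly_excess_kcal - stored_kcal   # ~9,500 kcal not stored as fat
    print(weekly_excess_kcal, stored_kcal, unaccounted_kcal)

The gap between the weekly surplus and what was actually stored is, on Peter’s reading, the portion burned off and radiated as heat rather than banked as fat.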

This makes a lot of sense to me based on my personal experiences overeating—anecdotally it seems to hold water (no pun intended, I swear — glycogen storage means water retention). Earlier this year I bumped up my caloric intake by at least 4K calories/week (maybe more). Weight gain was incredibly, surprisingly slow after the initial water-weight gain as I went from a glycogen-depleted (less water retention) weight to a glycogen-replenished (more water retention) weight. It was surprising to me that I didn’t gain more weight overeating, but what I noticed in the process was that I felt like I had more energy, felt warmer, and was more likely to be active — this is as compared to alternate-day up-day/down-day eating (with a net deficit or close to it) a la LeanGains.com. I literally felt my body burning off at least some of the caloric excess.

Meanwhile, I paid particular attention to the fact that dietary fat providing calories in excess of my needs would tend to be stored (and made the assumption that excess carbohydrates aren’t easily turned into fat), so I tended to overeat on carbs and not fat. I think that kept fat gain in check, too.

I’ll wrap with a thought about bodybuilders and a popular diet they use to gain weight — GOMAD, which stands for “Gallon of milk a day.” As near as I understand it, you basically eat your regular diet and then drink a gallon of milk on top of it (I believe whole milk is prescribed). The milk provides something like 2,400 excess calories per day. That’s nearly 17K a week, or, if it all turned into fat, a bit less than 5 lbs./week. I don’t know if this is true or not, but I doubt bodybuilders doing GOMAD actually gain 5 lbs./week. But why even bother with such a huge caloric excess in the first place? Probably because bodybuilders have found — via trial and error and a lot of self-experimentation — that in order to get mass gains (muscle, not just fat), they have to massively overeat to force the body to grow.
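For what it’s worth, here’s that rough GOMAD arithmetic written out as a sketch, using the conventional ~3,500 kcal per pound of body fat; the 2,400-calorie figure is the one quoted above, which I haven’t verified.

    # Rough GOMAD arithmetic (a sketch; assumes the common ~3,500 kcal per pound of fat rule of thumb).
    excess_kcal_per_day = 2400                     # extra calories from a gallon of whole milk (figure quoted above)
    weekly_excess_kcal = excess_kcal_per_day * 7   # ~16,800 kcal/week, i.e. "nearly 17K"

    KCAL_PER_LB_FAT = 3500
    max_fat_gain_lbs = weekly_excess_kcal / KCAL_PER_LB_FAT   # ~4.8 lbs/week if every excess calorie became fat

    print(weekly_excess_kcal, round(max_fat_gain_lbs, 1))

If real-world GOMAD gains come in well under that upper bound, it points back at the same homeostatic compensation discussed above.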

Seems our bodies are homeostatic systems and like to keep the status quo. Go figure. Ok, that was probably fascinating to 1% of you and likely .0000001% of the Internet, but I found it fascinating.

Thanks, Peter!

Compensating for Broken Fat Cells

When it comes to reading about the metabolic effects of eating a high-fat (and, in turn, low-carbohydrate) diet, I turn to Peter’s wonderful Hyperlipid. I was catching up on Reader the other day when I saw this post about broken mice. It’s a bit esoteric, so be warned, but there’s an idea therein that I find particularly interesting — it pertains to mice with broken metabolisms.

The result is the following:

They develop neuronally mediated acute insulin hypersensitivity in their adipocytes, they then abnormally store fat at low levels of insulin, increase eating to compensate for this calorie loss in to adipocytes and eventually develop adipocyte distention induced insulin resistance, which shows as metabolic syndrome.

If I might try to distill the above, what I take from it is that these mice have a totally screwed up insulin response in their fat cells, causing them to snag up whatever dietary fat is present (taking it out of the pool of energy available to the body). If you’re eating a decent amount of carbs or protein (both increase circulating insulin), the net effect will be overeating as your fat cells soak up any accompanying dietary fat (and perhaps a bit of converted fat), effectively “starving” your other cells (you get hungry as a result). This works until it breaks (the fat cells get too big and can’t take on more nutrients). Once broken, the body can’t deal with excess energy and starts failing (metabolic syndrome).

This all reminds me of Gary Taubes’ analogy to the plugged bathtub filling up with water until the water pressure gets high enough in the tub to push through the clogged drain; and when/if this stops working, the water goes over the sides of the tub (metabolic syndrome). The tub is your fat. Something like that.

Gamers Solve Problem that Dogged Researchers for Decades

“Online gamers have achieved a feat beyond the realm of ‘Second Life’ or ‘Dungeons and Dragons’: they have deciphered the structure of an enzyme of an AIDS-like virus that had thwarted scientists for a decade.”

Awesome. The takeaway I see in this approach is evocative of self-experimentation. Gamers are basically lots of little experimenters. Perhaps gamers succeeding where scientists had failed for a decade is less about playing a (purposeful) game for fun and more about simply having enough people iterating on the problem with a basic incentive to succeed (it’s fun).

Read more: http://www.smh.com.au/digital-life/games/online-gamers-crack-aids-enzyme-puzzle-20110919-1kgq2.html#ixzz1YPrBQ3yL

Online gamers crack AIDS enzyme puzzle

(snippet)

Figuring out the structure of proteins is vital for understanding the causes of many diseases and developing drugs to block them.
But a microscope gives only a flat image of what to the outsider looks like a plate of one-dimensional scrunched-up spaghetti. Pharmacologists, though, need a 3D picture that “unfolds” the molecule and rotates it in order to reveal potential targets for drugs.
This is where Foldit comes in.
Developed in 2008 by the University of Washington, it is a fun-for-purpose video game in which gamers, divided into competing groups, compete to unfold chains of amino acids – the building blocks of proteins – using a set of online tools.
To the astonishment of the scientists, the gamers produced an accurate model of the enzyme in just three weeks.
Cracking the enzyme “provides new insights for the design of antiretroviral drugs”, says the study, referring to the medication to keep people with the human immunodeficiency virus (HIV) alive.
It is believed to be the first time that gamers have resolved a long-standing scientific problem.
“We wanted to see if human intuition could succeed where automated methods had failed,” Firas Khatib of the university’s biochemistry lab said in a press release.
“The ingenuity of game players is a formidable force that, if properly directed, can be used to solve a wide range of scientific problems.”
One of Foldit’s creators, Seth Cooper, explained why gamers had succeeded where computers had failed.
“People have spatial reasoning skills, something computers are not yet good at,” he said.
“Games provide a framework for bringing together the strengths of computers and humans. The results in this week’s paper show that gaming, science and computation can be combined to make advances that were not possible before.”

Doctors make bad business people

http://blog.theentreprene…e-1#comment-464

Posting here a comment I left over at the Entrepreneur School Blog, where Jim Beach talks about the lack of business sense in doctors. If it sounds rant-ish, I apologize.

There are good reasons doctors make for bad business people (and entrepreneurs). The healthcare system is set up such that (1) the customer is an insurance company and (2) practicing medicine has turned into a matter of treating symptoms rather than proactive care. In other words, doctors expect *you* to call them when you have a problem because on the front end (the preventative end) they largely have nothing to offer.

Case in point: medical students only take maybe a semester’s worth of nutrition. Insane when you consider that diet (and lifestyle) choices are certainly the most likely causes of people’s health maladies — so really, when it comes to having the proper tools to help patients avoid getting ill in the first place, doctors aren’t equipped. They are proactively/preventatively neutered.

I won’t get into the insurance company angle, but that has an impact, too. The customer has been utterly divorced from the healthcare system when you’ve got so many layers between you and your care:

you -> employer’s HR department -> [the insurance company bureaucracy] -> [hospital bureaucracy that negotiates reimbursement amounts] -> doctors who code procedures and provide you care

Simply cutting out the employer side would immediately get the consumer more plugged in, as you’d have tens of thousands more individuals interested in getting better rates/plans from insurers than the lackadaisical corporate HR departments are. I digress.

And there’s a (3) here, too: doctors go to school for longer than any other profession, which makes them prone to dogmatism/procedure instead of being creative or driven toward trial/error and problem-solving.

All things considered, it’s a disaster of a system. It’s no surprise it’s only getting worse. The only surprise (to me) is that so few people are talking about these core, simple problems that are making for such a disastrous healthcare system.

Collapse and Renewal

http://www.peopleandplace…se_and_renewal#

Found a lot of interesting things to think about in this article. It seems to get at how dynamic/complex/living systems (ecosystems, economies) evolve rapidly and then stagnate, which leads to collapse, which in turn reboots the system (my simplification).

What I Have Learned
Change that is important is not gradual but is sudden and transformative. There is a common base cycle of change in individuals, in ecosystems, in business, in society. Increasing rigidity halts a long, slow period of growth and increasing efficiency. That begins a period of creative destruction and a fast period where uncertainty is great, where novelty emerges, and where new foundations are formed for a new cycle to begin. That is where we are now heading internationally.

In the United States, it is a time when the power of the state has achieved rigidity unseen since the triumphs of the falling of the Berlin Wall. Politicians have reacted to extreme disturbances, like the appalling terrorist attacks of 9/11, with powerful military response, a blind view of history and cultures, and a greedy desire for narrow benefit. Global economic expansion and dependence on peaking oil supplies, particularly in the Middle East, lock geopolitics into a self-destructive state from which transformation is extraordinarily difficult.

That is the time when change is most uncertain. We are living in it now. In this year we have simultaneously faced the sudden appearance of now reinforcing flips – sudden increases in the price of oil, increases in the costs of food, a financial collapse and the start of a recession, the retreat of Arctic ice sheets with climate warming, and accelerating loss of biodiversity. That is a lot to swallow and it reflects a process of human development and expansion since WWII.

But it is also the time when the individual has the greatest influence: when experiments determine the future; when the Internet opens opportunities for collaboration within and across nations; and when low cost mistakes are glorious because they trigger learning.

And these are the lessons I have learned that help in that process of dealing with turbulence:

1) Separate individual thought and work is essential but now, when integrative studies are the only way to reveal understanding, work with others is equally so. An individual’s knowledge can be combined with that of others to make the whole greater. In doing that we each recognize that we do not know everything but we do know, and know well, something. We learn with grace and humor and patience to work with others from different disciplines and backgrounds.

2) Complexity is in the mind of the beholder, in the patterns that are generated by causes that are simpler. Not as simple as once thought, but explained by a kind of “Rule of Hand”, not by a “Rule of Thumb”. Quite simply, I found in case after case of ecosystem change that four to six sets of variables operating at a number of different scales, in a non-linear way, captured nature’s flipping behavior. It turns out that ecosystems are temporary assemblages, pausing for a few hundreds of centuries in a passing state of quasi-stability as part of evolutionary change. Think of that when we think of the reality of global climate change.

3) There are about three kinds of scientists – the consolidator, the technical expert, and the artist. Consolidators accumulate and solidify advances and are deeply skeptical of ill formed and initial, hesitant steps. That can have a great value at stages in a scientific cycle when rigorous efforts to establish the strength and value of an idea is central. Technical experts assess the methods of investigation. Both assume they search for the certainty of understanding.

In contrast, I love the initial hesitant steps of the “artist scientist” and like to see clusters of them. That is the kind of thing needed at the beginning of a cycle of scientific enquiry or even just before that. Such nascent, partially stumbling ideas, are the largely hidden source for the engine that eventually generates change in science. I love the nascent ideas, the sudden explosion of a new idea, the connections of the new idea with others. I love the development and testing of the idea till it gets to the point it is convincing, or is rejected. That needs persistence to the level of stubbornness and I eagerly invest in that persistence.

All types of scientists are necessary, but I would love it if we could encourage and include the innovative type of artist. At the least, enjoy rigor, but never inhibit the innovative artists.

4) I learned that the key to make effective designs was to identify large, unattainable goals that can be approached, but not achieved, ones that relate to fundamental values of free speech, freedom, equity, tolerance and education. And then to add a tough design for the first step, in a way that highlights or creates options to design, later, a second step – and then a third and so on. We found that the results were steps that rapidly covered more ground than could ever be designed at the start. At the heart, that is adaptive design, where the unknown is great, learning is continual and actions evolve.

5) I am prodigiously curious about nature, and that triggers initial ideas. I am also terribly persistent and stubborn about developing and testing an idea that grabs me; at those times I am totally and narrowly focused, driven by the potential. That is what eventually makes an idea useful. So I conclude that natures create the idea; stubbornness makes it useful! But I have had to learn how to see nature. It is curiosity, anecdotes, funny correlations, jokes and metaphors that start that. It is new emerging theory that completes it.

One has to learn to develop senses that help us listen to intriguing voices that are hidden amongst the noise. Owlish ways to hear the rustle of the mouse. Do that and the future will be fun and rewarding. We all might even help, at this time of great change and threat, to develop further a world of justice, understanding and equity.