Sunday, January 26, 2020

No BF to Existentialists


As mentioned at various times in these blogs, in order to keep my home library of old-fashioned paper-and-ink books from exceeding my shelf capacity (“add more bookshelves” is no longer a desirable option), my rule-of-thumb is to keep a book only if in principle I might re-read it. Newly finished books that I never willingly would read again even if I had boundless time are not shelved at all. As I acquire new “keepers,” marginal titles on the shelves are culled out so the total shelved number (some 2500) remains about the same. In truth, most of the remaining books will not be re-read either simply because of limited time. However, if I’ve culled properly, any one of them ought to be re-readable if plucked out at random. I test the matter with some frequency by making just such random plucks, usually about one per week.

One recent out-pluck had been sitting on my shelf un-reread since 1972 when I was in college: Beyond Freedom and Dignity by BF Skinner. Back then it inspired a 20-page paper (see pic of cover page) that I wrote for an English class. The class was specifically for honing writing skills (i.e. it wasn’t a literature or general grammar class), and one of the assignments was a roughly 20-page research paper; the topic didn’t matter, since it was to be judged on form and presentation rather than content per se. The paper (The Conversion of a Reluctant Behaviorist) was an overview of Behaviorism that mostly cited Skinner’s formal studies but also referred to Beyond Freedom and Dignity. I did not know until afterward, when she handed back the graded paper and discussed it with me, that the professor had been a student of BF Skinner. (What were the odds of that?) Perhaps that helped the grade despite the supposed “content doesn’t matter” standard. It probably would not have helped had I mentioned I was lying… sort of. It simplified my task (hey, I had work to do in other classes) to explain straightforwardly how I found the tenets of Behaviorism convincing despite my initial misgivings. Adding a “yes, but” detailing my remaining misgivings would have required more nuance, more research, and more plain old work than I really wanted to put into the paper. Yet I had reservations then and still do.


My 2020 re-read hasn’t changed my opinion much. I was baffled by the book then, and I’m still at something of a loss today. I’ll say up front that I have a lot of respect for Skinner, the research scientist. He has more than proved his case that the Behaviorist school of psychology has a lot of merit. Though best known for his animal studies (e.g. the classic Superstition in the Pigeon), he argued the results are readily applicable to humans. In many ways they prove to be so. (Apparent failures in the technique on any one human are attributed to a lack of full information on that person’s reinforcement schedules outside the lab.) He turns the usual approach to psychology on its head (pun intended) by not tending first to the mind. Change the behavior via the proper reinforcement schedule, he says, and let the psyche take care of itself. If we like a behavioral change, our general mental state is likely to improve too. The approach is not without successes.

However, Beyond Freedom and Dignity is not about treating individuals. It is about treating society, and so it is political philosophy, not “science” despite the frequency with which he uses the word to dismiss anyone who disagrees with him as unscientific. Skinner is a strict determinist who doesn’t believe in free will.

I think I need to sketch out one personal view, which informs my response to this book: in my opinion the whole discussion of determinism and free will is academic. It’s a bit like debates over whether time is real or just an illusion created by the perception of entropy: as a practical matter, tomorrow will arrive for us whether the passage of time is “real” or not, so we’d better be ready for it and we’d better pay our bills before the end of the month. As for free will, as a practical matter we have it, whatever the ultimate underlying cosmic reality might be. We have to hold people accountable for their choices, which means we have to assume people make them. We can’t ignore criminal behavior, for example, on the grounds that the criminal had no choice or culpability because the crime was already built into the structure of the universe – were that true, Jeffrey Epstein, for one, ought never to have been arrested. I am not about to surrender my freedom of choice because a determinist says I don’t have any. As a practical day-to-day matter I do. (Notice that “to surrender” also would be a choice.) It is notoriously hard to define consciousness – the meta-state of not only knowing but knowing that one knows – but it is safe to say that human minds are more complex than those of pigeons. We can consciously choose to alter our behavior even if the reinforcements remain the same. It’s not always easy, as any addict will tell you, but we can do it.

Skinner argues that since there is no such thing as freedom or autonomous beings, we should chuck the whole idea of personal liberty out the window and organize society on scientific principles (aka his principles) with a structure of reinforcements that would maximize human happiness. I’m not quite sure how we could choose to do that, since by his own argument we don’t really choose anything – what we do or don’t do is already predetermined. And whose definition of happiness?

Once again, I respect the work by Skinner that actually is scientific, but I can’t help thinking that in this book he has gone seriously wrong somehow. Skinner personally might have been a kind-hearted soul who genuinely wished for human happiness, but it’s not hard to see how easily his philosophy can be coopted by less kindly authoritarians. Besides, kindly authoritarians are often the most dangerous of all.

Will Beyond Freedom and Dignity go back on the shelf? Probably. I keep a lot of books with which I disagree. I’ll let it sit for a while longer on my desk, though, while I consider it.


BF on pigeons and people 


Thursday, January 23, 2020

On Being One’s Own Chauffeur


If you’re over 50, every day is the 50th anniversary of something personal, but of course some of those days are more memorable than others. For no obvious reason, it occurred to me while on the road this morning that I got my driver’s license 50 Januaries ago. US states vary in the minimum age for a license, ranging from as low as 14 in South Dakota to as high as 17 in my own state of New Jersey. (There currently is a “probationary license” for 17-year-olds in NJ with various time and passenger restrictions, but in 1970 a license was a license was a license.) My 17th birthday was at the end of November in 1969, but because of the holidays I wasn’t able to schedule the driver’s test before January 1970. I passed. So, one morning during this month 50 years ago, instead of getting on the school bus or cadging a ride from mom, I slipped behind the wheel of my dad’s car (with his permission) and drove off alone for the first time. Except for the destination (school), it was a liberating experience. It was another five years before the open road tempted me to follow it to the Pacific and back, but that morning’s seven miles to school were a first taste of mobile freedom. There are other personally notable 50th anniversaries coming up this year – high school graduation, for instance – but as a rite of passage getting a driver’s license in many ways mattered to me more.

1970 Jeepster 
My mom had sold her 1967 GTO (400 cu. in., 360 hp) a couple of months earlier. I’m sure there was a connection between that decision and my upcoming license. In its place she bought a 1970 Pontiac Grand Prix, an enormous coupe with a hood large enough to subdivide into plots for single-family houses. It wasn’t a vehicle I ever would ask to drive. Sometimes my dad would let me borrow his 1968 Mercedes-Benz 230, which he had bought two years earlier with only 8,000 miles on it; it was a bargain sale by a musician who had just lost an argument with the IRS. Most commonly, though, for the next three years I drove the 1970 Jeepster on which (for the most part) I had learned to drive. The Jeepster was an excellent vehicle for a newbie precisely because it was difficult: a 4-speed stick-shift V6 with no power steering, no power brakes, and a clutch with scarcely any slippage, so the slightest error in foot pressure would stall the engine. The ignition was twitchy, too, so I habitually parked on an incline so I could let it roll forward, turn the key, and engage second gear: it started every time that way. Every vehicle I’ve driven since has seemed easy.

The peculiar geography of New Jersey also makes it ideal (which is to say challenging) for new drivers. I know people who grew up in cities and who consequently find dark winding country roads scary. Conversely, I know drivers from the farmlands who are overwhelmed by urban traffic. NJ is a weird mix of urban, suburban, and rural: often in the same short trip. Perhaps something about that mixed experience contributes to a characteristic driving style responsible for NJ drivers being widely regarded as second only to Massachusetts drivers as people you don’t want on the same road as you. Oddly, despite their reputations for rude, aggressive driving, your chances (according to the Centers for Disease Control and Prevention) of dying in an automobile accident (1.3% in the USA overall over a lifetime, 10.7 per 100,000 in any one year) are lowest in Massachusetts (5.6 per 100,000) and second lowest in New Jersey (6.3 per 100,000). I guess we’re more likely to cause accidents than have them. Montana, of all places, has the highest fatality risk at a whopping 23.3 per 100,000, though Montanan drivers do score high (3rd place) on the politeness scale.

My dad had been in the passenger seat during the previous months in 1969 when I was learning to drive; other than the instructor in formal Driver’s Ed classes at school, I don’t recall anyone else ever being there. I’m sure that took steely nerves on his part as I lurched, stalled, over-braked, and oversteered in a vehicle whose short wheelbase made it easy to roll. I had my own experience attempting to impart lessons from a passenger seat several years later (see The Driving Lesson) and the results were not good, so I appreciate his daring. My dad managed to contain all but a few expressions of alarm during all that time.

Millennials and iGens (aka GenZ) are a puzzle to many of my generation and of GenX with regard to licenses, as with so many other things. In large numbers they are, of their own accord, delaying or forgoing licenses. Only 77% of 20-24 year-olds currently have driver’s licenses; in 1983, 92% did. Among teens the drop-off is steeper yet. Most states issue licenses at age 16, but only 25% of US 16-year-olds have them; in 1983, 46% did. I don’t pretend to understand this. Apparently they don’t mind being chauffeured by mom and dad, an idea that was anathema (even when unavoidable) to my generation. Parental chauffeurs were a deal-killer when dating, for one thing, but then dating is also as old-fashioned as Blockbuster. Teens today more commonly hang out in groups rather than pair off for a burger (non-vegan back then) and a movie. Parents don’t seem to mind the extended chauffeur duty either.

Driver’s licenses in the US are almost as old as automobiles, New York in 1901 being the first state to issue them. There was no exam of any kind: just a fee. Not until the 1950s did the majority of states require driving tests. (All have since 1959.) The license was something that could be revoked, however, so it did serve a law-enforcement purpose. For the first 15 years of the 20th century, two types of passenger car drivers dominated the roads, such as they were. There were the early-adopter auto enthusiasts, who were likely to have a mechanical bent. Then there were chauffeurs, who doubled as mechanics. (A great period depiction of chauffeurs is in GB Shaw’s 1905 Man and Superman.) The reason was that the vehicles were unreliable. If you didn’t have the skills to repair your car yourself when it stopped for some reason (as it very likely would), it was best to have someone along who did. Chauffeurs were expensive (as they still are when they’re not your parents), but wealthy people overwhelmingly were the customers for cars anyway. ‘Chauffeur’ originally meant ‘stoker,’ so the word contains the notion of someone who keeps the engine running; this meaning was lost as reliability improved and chauffeurs became simply drivers.

Automobile reliability more than affordability (though both were important) was key to letting ordinary people be their own drivers; they could drive to town and back alone without serious risk of being stranded on the way. That level of reliability and affordability had been achieved by 1920, and it transformed American life. For me personally, the year I became my own chauffeur was 1970, and, strange as it may seem to iGens happy to be passengers, it was a joyful moment.


Maria Muldaur - Me and My Chauffeur Blues

Sunday, January 19, 2020

The Check Is in the Mail


My checkbooks are presently on my desk next to the computer (it’s an L desk), and next to them the latest bills are waiting to be opened. I usually pay them on weekends: most commonly on Sunday. (It’s not an unbending rule: sometimes I go wild and pay them on Tuesday.) Yes, I pay all but a few of them the old-fashioned way, not online.

Checks preceded money, as such. Standard systems of weight for silver and gold (e.g. the 17-gram Babylonian shekel) were devised very early, which facilitated their use as standards of value, but the precious metals themselves came in any shape and size – hence the need to weigh them. Gold rings were commonplace for easy carriage. So, they were money only in a broad sense. The Lydians are credited with minting the first standard precious metal coins in the 8th century BCE and the Chinese with printing the first fiat folding currency (originally leather, later paper) in the 2nd century BCE; those are “money” by even the strictest definition. In ancient Sumer thousands of years before any of that, however, individual traders exchanged clay tokens notched to indicate quantities of sheep, grain, chairs, and other valuables. Those were, in effect, personal checks. In ancient Rome shipping companies doubled as banks and issued checks cashable at ports of call so shippers and passengers didn’t have to carry coins. Modern customer checking accounts operated by banks issuing printed checks with serial numbers (to “check” on them) appeared in 18th-century England, as did central clearinghouses for banks. By the 19th century they were a commonplace means for ordinary folks to settle debts large and small. The largest single check ever written by a private entity, by the way, was for $9,000,000,000, from the Bank of Tokyo-Mitsubishi UFJ Ltd to Morgan Stanley in 2008. The largest personal check was for $974,790,317.77, a divorce settlement from oilman Harold Hamm to his ex in 2015.
Abe Lincoln cashed an $800 check the day before he was shot.

As indicated above, I still use checks (paper not clay, though in principle a clay check arguably still should be valid) for most financial transactions, including paying recurring monthly bills. Though electric power companies, credit card companies, phone companies, and… well… pretty much every commercial enterprise offers (nags, actually) to take its recurring payments electronically, directly from my account “for your convenience,” I resist allowing that whenever possible. Sometimes it isn’t possible, but all my important bills are paid by paper check. I realize that is showing my age. I have millennial acquaintances with no checkbooks at all; even when they receive a check, they snap a photo of it with an iPhone and deposit it electronically. I still prefer a hands-on approach, which has the added benefit of simplifying accounting: my deductible expenses are all right there in ink in the checkbook register. There is also the secondary benefit of focusing my attention on my expenditures.

The first checking account in my own name dates to 1969, when I was 16. It was useful for depositing checks from a summer job, and I knew I would need an account when leaving for college the following year anyway. (I no longer recall whether the bank rep, a local fellow whom I knew personally, asked for any parental signatures; perhaps the regulations required an adult signature to open an account, but while I remember sitting at his desk I don’t remember being accompanied.) Prior to then I operated entirely in cash, which was normal in that day. Teens didn’t carry credit cards back then; a substantial minority of adults didn’t either. A few banks already were experimenting with debit cards and ATMs, but most Americans had never heard of them, much less seen one. No teen had access to one. The bank where I opened that account was gobbled up four or five times after 1969, but I still have the account at its successor – something about which I hadn’t really thought until this moment.

I did appreciate the handiness of credit cards right away, though I didn’t qualify for a major one until 1975. By then they were all but essential when traveling. Try renting a hotel room without one. I pay them off by check, however, much as that seems to annoy the issuers, judging by their constant wheedles to switch to electronic payments.

Governments – tax authorities in particular – are fond of the shift to electronic payments (cryptocurrencies excepted): not just payments to them but in general. It is so much easier for them to track income flows that way. Their computers can monitor all our transactions and red-flag any anomalies. Given their druthers, most governments would stop printing money altogether in favor of going all-electronic; they don’t, lest truly untraceable private currencies pick up the slack. Gold retains its appeal for many for just this under-the-radar characteristic. Cryptocurrencies appeal for the same reason. I actually seriously considered mining or buying some Bitcoins a decade ago, but hesitated because I didn’t really understand the blockchain record-keeping that is the basis of their value. By the time I read enough about it to grasp it, the profits had been made. The first known purchase made with Bitcoin was 10,000 bitcoins for a $25 pizza order in 2010, which established a value of US $0.0025 per coin. Today a single Bitcoin trades at $9,102. So, a pizza’s worth of coins acquired in 2010 would be worth $91,020,000 today. I missed out on that one, but at least I didn’t buy gold. Gold’s price has barely budged since 2010; even the measly interest offered by banks over that decade provided a better return.
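For anyone who wants to check my arithmetic, here is a quick back-of-the-envelope sketch in Python. The figures are simply the ones quoted above (the $9,102 spot price was a snapshot when I wrote this, not a live quote):

```python
# Back-of-the-envelope check of the pizza arithmetic above.
# All figures are the ones quoted in the post, not live market data.
pizza_price_usd = 25        # reported cost of the 2010 pizza order
coins_paid = 10_000         # bitcoins handed over for it
spot_price_usd = 9_102      # January 2020 price quoted above

implied_2010_price = pizza_price_usd / coins_paid    # dollars per coin in 2010
value_today = coins_paid * spot_price_usd            # same coins at today's price

print(f"Implied 2010 price per coin: ${implied_2010_price:.4f}")  # $0.0025
print(f"Those coins at today's price: ${value_today:,}")          # $91,020,000
```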

Well, those bills on the desk won’t be paid in gold, Bitcoin, or electrons. The paper checks will be in the mail. Hey, at least they’re not clay.


John Lee Hooker - I Need Some Money

Thursday, January 16, 2020

Gaining Traction


As I’ve mentioned in the past, Peter Jackson is a filmmaker whose work I admire more than like. A big exception is the World War One documentary They Shall Not Grow Old, a stunning film that I both like and admire, though the theater in which I saw it was almost empty. But even though films such as Lord of the Rings and The Hobbit are not for me, I recognize them as remarkable moviemaking. For that reason, I skipped Mortal Engines in the theater and wasn’t particularly eager to see it on DVD, but didn’t fear hating it either. This instinct proved sound. Jackson’s own involvement in this film was peripheral, but his usual team of fx engineers were at the core of its production, so calling it a Peter Jackson movie is not entirely unfair. The movie is based on the Young Adult novel series by Philip Reeve.

In a dystopian world long after the “sixty minute war” destroyed the bulk of civilization, cities on giant caterpillar tractor treads wander the wastelands consuming surviving smaller towns for fuel and resources: a system known as Municipal Darwinism. London is a big player in this system. Traditional static villages of the Anti-Traction League exist beyond a great wall in Asia, however, and they support subversive Anti-Tractionists in the Western mobile cities: a well-worn rapacious-West vs. spiritual-East trope. London bigwig Valentine (Hugo Weaving) has a secret project in St Paul’s Cathedral that may allow the city to take on the wall. Opposing him are the Anti-Tractionist Hester (Hera Hilmar), who has a personal as well as political grudge; the hapless Tom (Robert Sheehan), who follows Hester like a puppy dog; and Anna Fang (Jihae), whose warrior chops are oddly impersonal. There is also a reanimated killer cyborg named Shrike, who is the only one in the movie to show any depth or character development.

Even a fantasy film benefits from exposing the human heart and human values. Little of either beyond the shallowest is on display here. Nonetheless, the movie has glitzy fx, nicely done action sequences, protagonists who (though not engaging) are not dislikable, and a more or less coherent plot – and it just looks good, which counts for something.

The Upshot: entertaining enough to watch once – twice, not so much. Thumbs ever so slightly tilted above the horizontal.

**** ****

It seemed appropriate to follow up the movie with a book about the sort of weapon likely to be used in a “sixty-minute war.”

Revisiting South Africa's Nuclear Weapons Program: Its History, Dismantlement, and Lessons for Today by David H Albright and Andrea Stricker is a heavily documented look at the weapons program of the only country ever to develop nuclear weapons and then give them up. South Africa built 8 active devices between 1979 and 1989, the year the decision was made to dismantle them. South Africa signed the NPT (Non-Proliferation Treaty) in 1991, which was followed by an IAEA inspection regime that was highly intrusive and not specifically required by the NPT; amid the contemporaneous transition from apartheid to a broadly representative government, Pretoria (grudgingly) acceded to it in order to help rebuild the country’s international standing. Accordingly, there are detailed records of nuclear-related facilities (right down to storage sheds), of the production and disposition of enriched uranium, and of the dismantling procedures.

As the book explains, the biggest obstacle to building a fission nuclear weapon is not engineering the device but obtaining the fissile material to put in it. There are two practical options, both of which require a sophisticated industrial capacity: U235 or Pu239. Weapons grade for either is usually regarded as 90% pure, though as low as 80% U235 can work at a reduced yield. The advantage of Pu239 is that plutonium can be extracted from spent fuel rods of nuclear reactors, which is why the NPT requires strict accounting of spent fuel by signatory states. Uranium deposits occur naturally, but natural uranium is more than 99.274% U238, which is not bomb material; separating out the 0.72% U235 (other isotopes make up the difference) from natural uranium is a complex and laborious process. The advantage of uranium, however, is that the weapon itself can be much simpler, e.g. a gun device that shoots one chunk of U235 into another to create a critical mass; plutonium requires a more complicated implosion mechanism to create a critical density. South Africa took the uranium route.
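To give a rough sense of why that separation is the hard part, here is a minimal feed-versus-product sketch in Python. The 0.72% and 90% assays come from the paragraph above; the 0.3% tails assay is my own illustrative assumption, not a figure from the book:

```python
# Simple two-stream enrichment mass balance (feed -> product + tails).
# U-234 is ignored; assays are weight fractions of U-235.
# 0.72% natural and 90% "weapons grade" are from the paragraph above;
# the 0.3% tails assay is an illustrative assumption, not a figure from the book.
def feed_per_kg_product(product_assay, feed_assay=0.0072, tails_assay=0.003):
    """Kilograms of natural-uranium feed needed per kilogram of product."""
    return (product_assay - tails_assay) / (feed_assay - tails_assay)

print(f"{feed_per_kg_product(0.90):.0f} kg of natural uranium per kg of 90% HEU")
# -> roughly 214 kg of feed per kg of product, before even counting
#    the separative work needed to push each kilogram through the cascade
```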

The book reveals how a relatively minor power with stiff sanctions against it was still able to home-grow its own nuclear program. It reveals why: in this case because the South African regime felt its existence was threatened by Soviet-backed communist forces in Angola and Mozambique. It also reveals how it is possible to undo a decision to go nuclear. Once again, there is a reason why: in this case the end of the Cold War and of the Marxist threat. All four points are definitely relevant with regard to current and aspirational nuclear states.
  
As a history, the book is more informational than engrossing, but it is enough of the former for a Thumbs Up.


Trailer Mortal Engines

Sunday, January 12, 2020

To Coin a Phrase


I was talking with a friend the other day and noticed we were both conversing in clichés more than usual. We cut to the chase and left no stone unturned while things came down the pike when they weren’t coming out of the woodwork. We all use clichés when speaking, since it is hard to be extemporaneously original and eloquent at all times, but on those occasions when they dominate our speech, it is likely that we don’t have much of substance to say. This was indeed the case the other day: nothing much new had happened (about which either of us wanted to talk anyway) since the last time we’d met, so we filled the air with familiar sounds before going our separate ways. Speech is one thing, but excessive use of cliché when writing is simply lazy – a vice not altogether alien to me. Well, there is no point crying over spilled milk.

Clichés often linger long after their literal meanings have ceased to be understood. For example, “by and large,” a phrase I employed twice in the same blog three posts ago, is a sailing term meaning “against and with the wind,” which I’d bet is news to most people who use it in place of "generally." Clichés often are clever – or at least so they seem the first 1000 times we hear them. Unsurprisingly, then, Shakespeare is the origin of more than a few: wild goose chase (Romeo and Juliet); be-all, end-all (Macbeth); dead as a doornail (Henry VI, Part II); heart of gold (Henry V); break the ice (The Taming of the Shrew); good riddance (Troilus and Cressida); cruel to be kind (Hamlet); seen better days (As You Like It); and, well, you get the idea. Yet, not every familiar saying is a cliché. The word “cliché” originally referred to block printing, which gives us a clue about how a saying transitions into one: overuse. When a phrase or word combination has become so trite that it is less effective than the simple word it replaces, it is probably best to use the simple word. So, at this juncture (which is a cliché), it is probably better to describe someone as reacting “impulsively” rather than “at the drop of a hat.” To be sure, “overuse” is (here comes a cliché) in the eye of the beholder, but most of us will say (yes, here comes another) “I know it when I see it.”

None of this really matters except to writers who wish their prose to be more fresh than stale. Nowadays, however, a great many folks self-publish, whether in print or online (including blogs such as this one) with little or no editorial review. A second or third thought given to style, along with the basics of spelling and grammar, can do no harm. They (we) might find helpful It’s Been Said Before: A Guide to the Use and Abuse of Clichés by lexicographer Orin Hargraves, past president of the Dictionary Society of North America. The book is pretty much what the title says it is. Hargraves categorizes the different types of clichés and explains how they differ from idioms. He discusses hundreds of examples, with remarks on how each one may be best used or avoided and in what context. He readily admits that any such discussion is subjective, but his solid credentials make his opinions worth noting.

Hargraves doesn’t eschew all clichés even in semiformal prose. There are times when they are apt and when they can convey an idea to a reader quickly in an easily understandable way. He urges authors to write mindfully, however, and to consider if a shopworn phrase (yes, “a shopworn phrase” is a cliché) is the best way to convey one’s meaning. He quotes George Orwell, a very mindful writer, regarding clichés:

“They will construct your sentences for you – even think your thoughts for you to a certain extent – and at need they will perform the important service of partially concealing your meaning even from yourself.”

So, rather than immediately resorting to cliché when at the keyboard burning the midnight oil, bite the bullet, pull out all the bells and whistles, wear many hats, take the cake, and then breathe a sigh of relief.


Biff Rose – Ballad of Cliches (1969)


Tuesday, January 7, 2020

Over Easy


Between 1970 and 2010 I seldom ate breakfast. I had no hard-and-fast rule about it. Breakfast sometimes happened: no one ever was shocked to hear I had pancakes earlier in the day, but on a typical day I didn’t. On any given morning I just usually preferred to spend that time snoozing a bit longer and saving up the calories for lunch, the customary foods for which I liked better than the customary foods for breakfast anyway. I never believed the “most important meal of the day” adage, and indeed recent studies have cast serious doubt on it. Breakfasting in those years was more of an occasional indulgence. My meal habits began to change along with my lifestyle in general about a decade ago for a variety of unplanned reasons that I won’t list here. Nowadays breakfast gets the nod three times a week or so. It’s still not quite a majority of days, but it’s teetering on the edge.

Many authors who write in English on the history of cuisine repeat that breakfast was not really a thing until Tudor times. They allow that farmers and simple laborers ate before work, but say the upper crust generally did not. This is misleading. For one thing, farmers and simple laborers were by far the majority of the population in England as in the rest of the world. For another, the upper crust ate whenever they wanted: many of them in the morning. It is true that the word “breakfast” appears in English in Tudor times, but the idea was nothing new. The rise of wage employment in this era did tend to regularize working hours and, accordingly, mealtimes, but munching in the morning, at midday, and in the evening predated England. Ancient Romans, for example, followed the familiar three-meal pattern (sometimes adding an afternoon snack); jentaculum, the morning meal, most definitely translates as “breakfast.” (Replace the “j” with an “i” if you prefer classical spelling.) The first-century poet Martial mentions bakers selling boys pastries for breakfast (jam vendit pueris jentacula pistor), so we know Romans patronized the equivalent of donut shops in the morning. Other sources mention eggs, milk, mulsum (wine and honey), and a pottage of grains, vegetables, and meats all cooked together in a stew pot. Naturally, wealthier folk had more choices, and ate much the same as they did at other meals.

Yesterday morning (I am writing soon after midnight, so I mean some 15 hours ago), I stopped by one of my usual a.m. haunts and perused the specials on the board. The country fried steak is great at this spot, but it is a hefty meal, to put it mildly; it’s best reserved for days when I’m hungrier than I was. The chili jalapeno omelet called to me, too, but in the end I opted for something else: eggs over easy on prime rib hash with hash browns on the side. While this was a satisfying selection, it did raise the question of why some foods are customary for breakfast and not others – hamburgers, for instance. Ultimately, the answer is circular: items are offered on the breakfast menu because we expect them to be there. There is a history to those expectations, but much of it is not as deep as one might imagine.

The deepest roots involve the cereal porridges: oatmeal, barley, wheat, maize (in the New World), and rice (especially in Asia). They are prehistoric, having become available with the advent of Neolithic farming. (The same meal likely would be on the table later, too.) Stews and pottages (not unlike Roman pottage) also appeared early and were common worldwide up through the 18th century. Wealthier folk had access to more varied fare. Breakfast was very much a thing in colonial America and the early Republic. One aspect of breakfast on this continent right up through the middle of the 19th century that seems odd to us today is the ubiquity of alcohol. Ale, wine, or cider were a normal part of the meal. John Adams accompanied breakfast every day with a tankard of hard cider, and his son John Quincy was famous for his early-morning swims in the Potomac; the only thing odd about either routine in their day was the swim. As wealth increased and diets improved through the 19th century, breakfast became bigger and its fare more distinct from other meals. Victorian fare included ham, eggs, puddings, venison pies, fritters, and more. The temperance movement and cleaner sources of water reduced alcohol consumption in the a.m.

In response to the Victorian trend toward heavy breakfasts, Kellogg and Post (following the earlier example of Graham) developed their light, crispy (vegetarian) breakfasts as health food at the turn of the century. The prudish Mr. Graham specifically entertained the hope that his crackers also would reduce sex drive, mention of which aspiration wisely was left out of the marketing campaign. (They don’t.) Americans took to cornflakes and other lighter breakfasts. The invention of the electric waffle iron in 1911 further enhanced grain-based breakfasts. This changed again in the 1920s thanks to Edward Bernays, Sigmund Freud’s nephew and author of the how-to (and why-to) book Propaganda (1928). Faced with a surplus of bacon, the Beech-Nut Packing Company hired Bernays, who found 5000 doctors to say that the high-protein farmer’s diet was right all along. His advertising campaign cited this study (“doctors say…”). Bacon and sausage sales took off, along with sales of the eggs that the meats so tastily accompany. By the end of the 1920s breakfast menus were pretty much what they still are today. (Bernays also marketed cigarettes to women in the ‘20s by associating cigarettes with suffragists, but that is another story.)

All that is making me hungry again. I’ll probably go out for breakfast at least once more this week. Yet, today (meaning in about 12 hours) I think I’ll skip it and go out for lunch instead.

Ray Davies – Is There Life after Breakfast?