Tuesday, September 17, 2019

All That and a Bag of …

This week’s Economist has a special section called (with the magazine’s usual penchant for puns) “Chips with everything.” It is about the inclusion of computer chips not just in items that we normally think of as high tech, but in things as mundane as lamps, doorbells, and fish tanks. (A Las Vegas casino’s security systems recently were hacked via its smart fish tank.) They talk to each other, too, as elements of pervasive Artificial Intelligences. Smart TVs (and it is getting hard to buy any other kind) watch the watchers and collect data on viewing habits, which they pass on to data mining AIs. Some TVs monitor conversations. Connecting them with the other household chips offers some interesting possibilities. With existing tech (though to my knowledge it is not yet done) a TV could, for example, enhance the experience of a horror movie by flickering your lights and turning fans on and off to make creepy sounds – and to judge for itself how and when to do it. Our cars already are apt to take over the driving (e.g. by braking or correcting for lane drift) from us if they think we are doing a bad job. Everything is getting smarter except possibly the people who increasingly rely on tech to do their thinking for them. One wonders if the driving skills of humans will atrophy as AIs take over. So, too, other skills.

Moore’s Law, which states that the number of transistors per chip doubles (while the price per transistor halves) roughly every two years, continues to operate as it has for half a century. Repeated predictions that it was nearing its practical limit have proven premature. So our toys are ever smarter while their integration into the internet makes the emergent AIs ever eerier. For a time it seemed as though humans were getting brighter, too, albeit at a much slower pace. The Flynn Effect is the apparent increase in average intelligence that manifested in the 20th century. Average intelligence is by definition 100, so whatever the average number of correct answers on an IQ test might be in any given year is assigned the number 100. (The most common scale has a standard deviation of 15, meaning that about 68% of the population falls between 85 and 115; roughly 13.6% are between 115 and 130 and another 13.6% between 70 and 85; only about 2.3% are above 130 and 2.3% below 70.) Yet, the raw number of correct answers on the tests increased year by year in the 20th century. Judging by results on the original century-old tests (without renormalizing each year), it appears that average intelligence in advanced countries rose 3 points per decade.
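For the statistically curious, those bands follow directly from the cumulative distribution of a normal curve with mean 100 and standard deviation 15. A quick sketch in Python (standard library only; the helper name `phi` is my own, not anything from the IQ literature) reproduces the percentages:

```python
from math import erf, sqrt

def phi(x, mean=100, sd=15):
    """Cumulative probability of a normal IQ distribution at score x."""
    return 0.5 * (1 + erf((x - mean) / (sd * sqrt(2))))

within_1sd = phi(115) - phi(85)    # scores between 85 and 115
band_1_2sd = phi(130) - phi(115)   # 115 to 130 (same share falls 70 to 85)
above_2sd = 1 - phi(130)           # above 130 (same share falls below 70)

print(f"85-115:    {within_1sd:.1%}")   # ~68.3%
print(f"115-130:   {band_1_2sd:.1%}")   # ~13.6%
print(f"above 130: {above_2sd:.1%}")    # ~2.3%
```

The same arithmetic shows why a 15-point shift over 50 years is so dramatic: it moves the old average a full standard deviation, from the 50th percentile to roughly the 16th.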

Something seems wrong about this. It means that the average test-taker in 1952 (the year I was born) was, by 2002 standards (and also by 2019 standards, but more on that in a moment), at or below the bottom edge of normal intelligence (85); to put it the other way around, it means the average present-day test-taker is by 1952 standards intellectually gifted (at or above 115). This bothered James Flynn, who discovered the effect in the 1980s. “Why,” he asks, “did teachers of 30 years experience not express amazement at finding their classes filling up with gifted students?” They didn’t and they don’t. Quite the contrary. Despite much heavier homework loads, kids in high school today perform worse than students in the 1950s and 1960s. 12th graders have a smaller active vocabulary than 12th graders of 50 years ago. They have no better understanding of algebra or geometry, and without calculators their elementary math skills are worse. The SATs had to be made easier to keep the nominal scores from tanking. If you took SATs in the 1970s you should add 70 points to your verbal scores and 30 points to your math in order to adjust them to 21st century standards.

How then to explain the 20th century rise in raw scores on IQ tests? A large part of it is that education and scholastic tests have steadily shifted to focus more on abstract reasoning (which is what IQ tests measure) and less on nuts-and-bolts facts and skills. In short, advertently or otherwise, educators increasingly have taught to the exam. Flynn uses the example of crossword puzzles. Almost all neophytes are bad at them, but eventually one learns to anticipate puns and unusual uses of words. With practice one becomes good at them. Something analogous seems to have gone on in IQ tests: practice. (It should be noted that none of the improvement was at the high end, which is to say that the raw scores of the top 2.5% who are naturally talented at abstract reasoning didn’t get any better over the decades.) Some of the Flynn Effect may well be developmental, however: a result of better childhood nutrition and health, in which case the effect should stall as good health and nutrition become more widespread. And, indeed, the effect stalled in Scandinavia in the 1990s and in most of the rest of the West in the early 2000s. In fact, scores went into reverse. See the 2006 paper Secular declines in cognitive test scores: A reversal of the Flynn Effect by Thomas Teasdale and David Owen, documenting the decline in IQ scores in Denmark and Norway between 1998 and 2004: “Across all tests, the decrease in the 5/6 year period corresponds to approximately 1.5 IQ points, very close to the net gain between 1988 and 1998.” They surmised that the Flynn effect is at an end in advanced countries, though still at work in less developed ones. Similar results soon turned up elsewhere. A few years later The Telegraph noted, “Tests carried out [in the UK] in 1980 and again in 2008 show that the IQ score of an average 14-year-old dropped by more than two points over the period.
Among those in the upper half of the intelligence scale, a group that is typically dominated by children from middle class families, performance was even worse, with an average IQ score six points below what it was 28 years ago.”

Just as the 20th century Flynn Effect was (in my opinion) more apparent than real, so too with the 21st century declines in IQ. In other words, I don’t think kids are biologically dumber than kids 20 years ago any more than I think they are biologically smarter than kids 100 years ago. Something in the cultural environment changed that affected their average scores. I don’t think it is an accident that the declines in First World countries coincided with widespread access to the internet, which brings us back to artificial intelligences and chips for all. Outside of school, young people give every appearance of being far smarter in everyday life than I and my peers were at their age, but it is an appearance facilitated by smartphones with endless information at their fingertips. When we didn’t know something back in the 1960s, we just stumbled around not knowing it until we asked someone (who might give us the wrong answer) or looked it up in a book, map, or other non-digital resource. Compared to the instantly informed youths of today, we were idiots. The credit for the difference, however, belongs to Steve Jobs.

Trivial Pursuit was a game for 1984 (when sales peaked), not 2019 when anyone with a phone can get a perfect score – or defeat any Jeopardy champion. Yet there is a difference between information and knowledge. It may seem like a waste of brainpower to remember historical dates, or a Robert Frost poem, or the atomic number of sodium when those are just a finger tap away, but it isn’t. Creativity, insight, and comprehension are all about making connections between different bits of knowledge in our own heads, not floating out there in a cloud. Knowledge also helps us think for ourselves and be less susceptible to propaganda perfectly tailored (by those AI data miners who know us so well) to appeal to our existing biases. Yes, our machines can be programmed to make those connections for us in a way that simulates (or perhaps is) creativity, but at that point the machines become earth’s sentient beings and it is time to hand over to them the keys to the planet.

I’m not a Luddite. I’m happy to let my phone remember other phone numbers and to let my GPS (on the first trip, not the second – or at least not the third) tell me when to turn right. I’ve no wish to go off the grid. I like having a world of information in the palm of my hand. Our easy reliance on the internet does give me some pause though, as does the proliferation of chips, AI, and surveillance technologies. Increasingly – and perhaps irreversibly – we have too much information and not enough knowledge.

Quiet Riot – Too Much Information

Tuesday, September 10, 2019

Place on the Food Chain

The quintessential American cuisine has far less to do with the entrees themselves than with the process by which they are prepared and served. It is fast food of a wide variety of types from the likes of Dairy Queen, McDonald’s, Pizza Hut, Taco Bell, KFC, et al. We also must include the presentations of the fast food chains’ slightly more dressed up cousins: Red Robin, Applebee’s, Red Lobster, IHOP, and so on. The chains offer consistency and reasonable pricing from coast to coast, which was their original attraction at a time when food safety regimens at roadside restaurants were…well…inconsistent. Consistency still matters to travelers on road trips. Most customers nevertheless are locals. You can’t tell where in the country you are by the architecture or menu of a McDonald’s (with a few exceptions such as a flying saucer McD’s in Roswell), but you can tell by the customers. At a time when the nation is increasingly divided along class lines (and other lines too), the customers remain a full cross-section of the local population: rich, poor, and all the shades in between patronize fast food outlets. An interesting account of the restaurants’ history and current place in the culture is Drive-Thru Dreams: A Journey through the Heart of America's Fast-Food Kingdom by Adam Chandler.

Chandler tells us that few of the common offerings by the chains were invented by them; at most they tweaked the recipes a little. French fries, for example, go back centuries. Thomas Jefferson, for one, requested from the White House chef, “Potatoes, deep-fried while raw, in small cuttings, served in the French manner.” A few years later Dolley Madison famously served ice cream at the White House, which is why there is a brand named after her. (She liked oyster flavor, of all things, which fortunately did not catch on generally.) It is unknown who invented the hamburger sandwich. Hamburger without the bun was regarded as a German specialty in the late 19th century and could be found described as such on many American restaurant menus. Someone (or many someones) must have tried it on a bun, but who is not recorded. By the 20th century it was no novelty.

The modern sandwich version of the hamburger was popularized (once again, not invented) in Wichita, Kansas, in 1916 by Walt Anderson who sold burgers for 5 cents each in his restaurant named White Castle. At a time when ground meat was a chancy selection in many establishments, his open kitchen where the beef could be seen being ground fresh made his burgers a hit. So did the onions and spices he added. Promoter Billy Ingram happened to eat there and knew a hit when he tasted one. He teamed up with Anderson and took the chain national in 1921 with a franchise model soon copied by competitors. From the start the locations and site plans wherever possible were designed to be automobile-friendly, which Ingram rightly deemed a key to success. Despite being the earliest hamburger franchise, White Castle was far surpassed in popularity by several others after WW2, but it still has dedicated fans. I have friends who on some sort of principle won’t set foot in a McDonald’s or Burger King but who will nonetheless drive the extra miles to a White Castle.

The founder-plus-promoter combo was a pattern for success repeated many times in the 20th century. Someone would open a restaurant or roadside stand that proved locally popular; a promoter with a vision partnered up with the owner and created a franchise. The most famous case was Ray Kroc who in 1954 espied the innovative and very successful drive-in McDonald’s, operated by the brothers Mac and Dick McDonald in San Bernardino. (The Big Mac was named for Mac McDonald, which was probably better than naming it for his brother.) Within a decade Kroc turned the franchise into not just a national but a global brand.

Perhaps counterintuitively (or then again perhaps not), what the founders and the promoters of the successful chains seem to have had in common was growing up poor. What for most people would have been a serious obstacle became instead a singular motivation. Glen Bell (Taco Bell) worked a series of menial jobs after WW2 before opening a taco stand. Al Copeland (Popeyes) said, “I never forget being poor. I know what it is and I don’t want it.” Colonel Sanders was broke time and again from failed businesses before his fried chicken recipe took flight. (The “Colonel” is legit, by the way: not as a military rank but as an honorific awarded by the Governor of Kentucky.) S. Truett Cathy (Chick-fil-A) was one of seven siblings in a public housing project. William Rosenberg dropped out of school at 14 and delivered telegrams before opening Dunkin’ Donuts. So, too, many of the most successful franchisees: Aslam Khan, for example, was born into poverty in Pakistan and emigrated to the US in the 1980s. He started as a dishwasher at a Church’s Chicken, rose to manager in three months because of his reliability and hard work, and eventually became a business owner with 97 Church’s outlets. His career from dishwasher to multimillionaire took 13 years.

The chains take a lot of heat from advocates of healthy eating. McDonald’s notoriously was waylaid in Morgan Spurlock’s movie Super Size Me, which documented Spurlock’s month of eating only at McDonald’s thrice daily. Spurlock ate 5000 calories per day and the results were not good. McD’s apologists respond that in fairness it’s hard to imagine 5000 calories per day of anything for a month being good for you. One can see both points of view, but few people really go into a fast food restaurant expecting health food. Nutritional factors aside, it’s easy to be snobbish against the chains, especially in parts of the country where non-chain midrange providers still prevail. As it happens one of those areas is my home state of NJ; independently owned diners in particular are abundant here: 525 at last count in a quite smallish state. The chains exist to be sure, but they are not as prominent as in much of the country and are mostly to be found on major highways. As a single man who dislikes cooking for himself, I eat out for either breakfast or lunch at modestly priced establishments 5 or 6 days per week, but I haven’t bought from a chain in all of 2019 to date. That’s not meant as some social statement, however: it’s just a reflection of the local options. I have no objection to Wendy’s and its ilk per se. When I’m on a road trip they are my most likely lunch stops, and for the very same reason they were for folks in 1921.

One segment of the customer base, it must be acknowledged, has had some trouble ordering from the fast food menus: traditionally the chains have not been vegan-friendly. The ground is shifting a little in this regard as we are admonished from some quarters to get less of our protein from traditional farm animals and more from plants and insects. A few of the chains offer veggie burgers nowadays though I’m not aware of any offering bugs. I’m not eager to get on board with either personally as a consumer, but it does raise the possibility of fortunes to be made from brand new franchises that offer nothing else. Kentucky Fried Crickets perhaps.

The Smithereens – White Castle Blues

Tuesday, September 3, 2019


I have a far less pileous pate than I once did, but what hair there is still needs to be cut once a month or so. Well, strictly speaking it doesn’t need to be cut, but I don’t like the bedraggled look that otherwise ensues; I’m not enough of an old hippie to pull off the look successfully. I just look unkempt – and for some reason older and grayer. I still go to a traditional barbershop (not a stylist) in nearby Morristown that charges $15. I’ve been going there for decades: long enough to see three generations of clippers. It is a pleasant enough experience.

Not so when I was little. As a kid I hated getting haircuts. I don’t know why. I didn’t mind the result. Short hair was fine by me. (At age 6 I cut my own hair with scissors but made such a wreck of it that I needed a buzz cut to hide the damage.) There was just something about the barbershop process I didn’t like. Freud had an opinion that a dislike of haircuts stems from them being symbolic of cutting something else. Knowing Freud, you can guess what, but I’m quite certain that notion never crossed my mind as a child. By the time I was old enough to understand what Freud was talking about, the haircuts no longer bothered me. I didn’t really change my style of haircut dramatically over the years even though I was in high school in the late 60s when long hair on men became a social statement. I didn’t fully embrace either the style or the statement, so my hair was never longer than in the 1974 pic below (with my sister in San Francisco) and (except for that buzz cut incident) never shorter than it is now. In 1974, by the way, that length was pretty conservative, as was the lapel width, which looks excessive today. Most of my friends in the 60s and 70s (except in prep school, which had codes) were considerably shaggier.
[Photos: 1962 · 1974 · 2019]

Upper middle class hippies largely gave up long hair when construction workers and other working class young men started wearing ponytails in the mid-70s. Author Tom Wolfe had predicted this and wasn’t above patting himself on the back for getting it right. Some genuine hippies stuck with it, of course. In his book The Baby Boom: How It Got That Way (And It Wasn't My Fault) (And I'll Never Do It Again) P.J. O’Rourke apologizes to subsequent generations for having “used up all the weird” in hair and clothes, thereby forcing them to rebel in more painful ways such as piercings and extensive tattoos. The current crop of Millennials and iGens (aka Gen Z) demonstrates he was wrong about that. Some of them sport hairstyles that would have startled folks at Woodstock in 1969. The hipster thing of ironically retro facial hair is also very non-60s/70s.

I obviously don’t have firsthand experience of the social implications of hair from the female perspective, but there are many books on the subject including Rapunzel's Daughters: What Women's Hair Tells Us about Women's Lives by Rose Weitz, a sociologist at Arizona State. She notes, for example, that (in general…always in general) straight hair comes off as more conservative and curly hair as more informal. Long hair is regarded as more sexually attractive but short hair as more professional. She recalls the remark of a female exec regarding the corporate management hierarchy that “you could draw a line: Above that line, no woman’s hair touched her shoulders.”

An interesting little book that covers the subject of hair compendiously, if not in great depth, is Hair by Scott Lowe. Hair, as Lowe writes on page 1, “has an incredible power to annoy your antagonists, attract potential lovers, infuriate your neighbors, upset your parents, raise eyebrows at work, find compatible friends, and allow you to create, or recreate, your identity.” Sometimes the style is a deliberate statement, as was the Afro in the 60s/70s, and sometimes it is an old tradition (such as beards in certain Islamist sects and among the Amish) that it would be a deliberate statement to buck. Metal rock bands of a certain flavor are called Hair Bands for obvious reasons. Shorn heads have long been a symbol of shame: famously, collaborators with Germans were forcibly shorn in liberated areas in WW2. Sometimes, however, shearing one’s head is a religious rite as among the Jains. There is, in short, a lot of symbolism emanating from those head and face follicles. Interpreting it across cultures can be a challenge.

For myself, however, I just sit in the barber chair and say, “Just shorten it up for me please.” “You got it,” is the usual response. There is no doubt a lot of symbolism in that simple exchange, but I’ll leave figuring out what it is to others – preferably not Freud.

George Thorogood and the Destroyers – Get A Haircut

Tuesday, August 27, 2019

Word Therapy

I’ve been fortunate in life in so many ways – so far, as one must always qualify. While I’m well aware that good health is always at risk of vanishing without notice, mine to date has been generally robust despite a family history that stacks the odds against it. (I am the last of my immediate family still standing, as I have been for nearly two decades.) Whatever health issues have arisen have been my own fault, e.g. caps for teeth (floss more!) or, just last week for the first time ever, a touch of gout in the right foot (eat better!) that cleared up in a few days. Another piece of pure luck (I obviously had nothing to do with it) is having grown up in a caring and supportive family. There were no psychically destructive childhood horrors to overcome later in life beyond the inescapable ones faced even by the Beaver and his brother Wally. I managed to take some very wrong turns as an adult anyway (the consequences of my choices have not been as charmed as those of French Stewart in the 3rd Rock clip below), but like the health issues they have been entirely my fault – and temporary.

There is a curious downside to a wholesome upbringing and minimal trauma beyond the inescapable ones. (We all lose people along the way.) Creativity thrives on bad experiences. To be sure, a Beaver-esque soul can self-motivate to create anyway and to seek out experiences. Hemingway, for example, had nice parents and a comfortable upper-middle-class childhood yet did pretty well as a writer. (For all his towering reputation, Hemingway wrote plenty of absolute garbage by the way, but when he was good he was excellent.) Yet trauma, when it doesn’t crush completely (as it too often does), can drive people to create and produce in a way that prosperity struggles to match. I remember a decade ago in the cinema watching the movie The Runaways based on Cherie Currie’s autobiography Neon Angel. Despite (or because of) a dreadful home life, Currie became the band’s lead singer at age 15. A friend with whom I was watching the film commented, “You know, our families were just way too functional.”

“Yes,” I agreed. “It ruined our careers.”

We were joking, of course, but not entirely. In truth, my most productive period for fiction including my one full-length novel was in the few years following the worst years of my life. (Years, once again, for which I blame only my own choices.) Words flowed more easily onto pages than they had before or have since.

These somewhat rambling thoughts came to mind after reading Becoming Superman: My Journey from Poverty to Hollywood with Stops along the Way at Murder, Mayhem, Movie Stars, Cults, Slums, Sociopaths, and War Crimes, an autobiography by J. Michael Straczynski. Outside of scifi fan circles Straczynski is not a household name, but the work of this prolific novelist, journalist, comic book writer (for both Marvel and DC), TV animation scriptwriter, live-action TV scriptwriter (Babylon 5 was his personal creation), and screenwriter (Changeling, Thor, World War Z, et al) is almost impossible to avoid. By “becoming Superman,” Straczynski does not mean attaining “powers and abilities far beyond those of mortal men.” He means the adoption of a moral code and personal outlook attainable by anyone.

To call Straczynski’s childhood impoverished and hellish would be to make it sound far too pleasant. Straczynski describes his father as a hard-drinking literal Nazi (he kept the WW2-vintage uniform in the closet) con man who kidnapped his mother from a brothel in Seattle and then beat her and their children mercilessly over the course of their lives together. The physical abuse was matched by verbal abuse that was shockingly mean-spirited and manipulative. His mother, he tells us, had mental issues and tried to kill him as a boy at least twice: once by smothering him and once by pushing him off a roof. He was caught by wires when falling from the roof, preventing him from splatting on the concrete. They were not June and Ward Cleaver. Straczynski retreated into comic books, identifying especially with Superman’s quest to fit into a world in which he didn’t really belong. It would have been easy enough to be crushed by a family background like this, but Straczynski saw his path out of this mess when, still a young teen, he realized he didn’t have to be like his father. He could choose to be different: “The most important aspect to negate was my family’s sense of victimization… They believed that since they had been mistreated, they were entitled to do the same to others without being questioned or criticized.” Refusing to be defined by his abusive family “would allow me to decide what I wanted to do with my life” and the kind of person he wanted to be. Straczynski doesn’t make light of the physical, social, and psychological obstacles to be overcome with this kind of starting point, but in existentialist fashion stresses the ultimate power of choice.

Writing became Straczynski’s therapy, as it is for so many authors. In one of those odd twists that sometimes happen in life, Straczynski was tagged by DC back in 2010 to write three Superman graphic novels. The character still resonates with him: “Being kind, making hard decisions, standing up for what’s right, pointing toward hope and truth, and embracing the power of persistence…” He tells us we all can do that if we choose. “We just have to be willing to choose. That’s it. That’s the secret.”

Straczynski’s awful life experiences are an endless resource that gives depth and breadth to his stories. Nonetheless, as much as I admire good writers, I wouldn’t trade my far more comfy upbringing for any Hugo Award. If that ranks ease and peace of mind over art, so be it.

French Stewart’s rendition of Randy Newman’s Life Has Been Good to Me

Tuesday, August 20, 2019

Seeing Red

An unwelcome visitor lay on my lawn this morning: a red leaf. Summer lasts officially until the equinox (September 23 this year) and unofficially (in the US) until Labor Day (September 2 this year). Either way August is solidly summer, the temperature today outside my door is a tad over 80 (that’s 27 by the scale everyone else in the world uses), and red leaves have no place on my lawn. Yet, 18-year-olds already have left for their overpriced colleges while Halloween candy is already appearing on supermarket shelves. The Fall Field Crickets (which look and sound the same as Spring Field Crickets but are different species nonetheless) are chirping loudly. The sun spends less time in the sky each day.

Incidentally, it is a curiosity that red leaves, whether they arrive too early or on time, are more common in the northern states of the US and in Canada than in northern Europe despite similar climates. The natural breakdown of chlorophyll at the end of summer is enough to change leaf hues to yellow or orange (colors that dominate in Europe in the autumn), but turning a leaf red requires the active synthesizing of anthocyanins. Why would a plant bother making the extra effort? One suggestion (by S. Lev-Yadun and J.K. Holopainen in New Phytologist, 2009) is that there is a different mix of aphids and other pests on each continent due to their different geological and evolutionary histories. Red warns off more of the particular pests in North America in the same way that bright colors on insects, amphibians, and reptiles often warn predators that they are poisonous. It’s an evolutionary adaptation.

Be that as it may, I’m not ready yet to see those reds. Some people talk to their plants. (Do they find it harder to be vegetarians?) Perhaps I should give my trees a stern talk on the need to stay green at least until September 23. I doubt it would help but I suppose it wouldn’t hurt either.

Another harbinger of autumn is relatively recent – in fact, it’s the reverse of when I was a schoolboy. Kids reappear. Back in the ancient days of my youth unattended school-age kids were everywhere during the summer vacation months: on bicycles, in yards, in the shopping center, in the parks, on sidewalks…everywhere. I was one of them. Nowadays they vanish for the summer. They have to be somewhere. I suppose they are either inside or at organized activities (on soccer fields, for instance) that as a non-parent I simply don’t see. In early September, however, they appear at the curbside in the mornings waiting for school buses and again in the afternoons when they return. On my neighborhood cul-de-sac street there appear to be about 30. In August, however, they are nowhere to be seen. For much the same reasons, I’m in no more hurry to see them at the curbside than red leaves in trees.

So, I’ll laze in late summer while I can. (There is a metaphor about stage of life in there somewhere.) The jack-o-lanterns can wait.

The Orwells – Last Days in August

Tuesday, August 13, 2019

Gone West

Every war is horrific. That’s part of the definition. The cost is calculated first and foremost by the casualties. Yet that is not the sole cost – primary, but never sole. Societies are affected very differently by different conflicts: thriving after some (e.g. the US after WW2) and sickening after others (e.g. the Weimar Republic after WW1). Nor is there always an obvious correlation between the affect (yes, the spelling with an “a” is intended) and the size of the conflict or its battlefield outcome. Italy was on the winning side of WW1, for example, but had a bad aftermath. The Vietnam War was deeply unwholesome in its affects and effects in the United States in a way that Korea, for example, was not. The US was not so very different a place in 1953 than in 1950. The US was a vastly different place in 1973 (for that matter by 1968) than in 1965, and the war had much to do with it. Vietnam broke something in the American body politic and the US never really recovered from it – the military did but not the US as a whole. The deep divisions which ail us so much today have their roots in the era: among them wholesale (all too often justified) distrust both of officialdom and the press (the “credibility gap”) and a polarization of the public that led to more extremism and violence than we tend to remember.

It is anyone’s guess what would have followed had the Johnson Administration opted not to introduce large ground force units into Vietnam – or opted not to intervene further at all. (It should be remembered that JFK already had upped the number of “advisers” there to 16,000 in 1963, and their numbers climbed to 23,000 by 1965, but they weren’t officially combat troops and Johnson could have ordered them withdrawn without overmuch fanfare.) What if the war went ahead but had gone differently? We only can speculate, but Lewis Sorley makes the case that it could have gone very differently with a single change at the top in his book Westmoreland: The General Who Lost Vietnam.

Whenever I see a title like that about some military figure I check the author’s credentials before checking out the book. If it contains the judgments of a purely armchair strategist…well… consider the source. It’s hard to find credentials much better than Sorley’s however. A West Point graduate with a PhD from Johns Hopkins, he led the 1st Tank Battalion, 69th Armor, US Army in Vietnam in 1966 and 1967. He retired from the army as a Lt. Colonel in 1975 but then moved on to the CIA as Chief of the Policy and Plans Division. He is also a well-regarded historian. That’s plenty of credibility, but for an alternate view I nonetheless paired Sorley’s book with a re-read of General William Westmoreland’s A Soldier Reports, a memoir I first read more than 40 years ago.

Prior to his time in Vietnam Westmoreland had a solid and soldierly military record. He graduated in the middle of his class at West Point, led the 34th Field Artillery Battalion in Tunisia and Sicily in 1943, and commanded the 187th Airborne in Korea in 1952. At the end of his time in Korea he was promoted to Brigadier General and sent to the Pentagon where he worked as a staff officer on everything from personnel to budget matters. He was fully capable in those roles and accordingly added stars. His competence won him the confidence of his superiors, particularly General Taylor and General Wheeler, who in 1964 tagged him for Commander MACV (Military Assistance Command Vietnam) in overall charge of US operations there.

Yet, his previous experience in war zones was, though high level, subordinate; one gets the feeling that he was excellent at being second in command. Nothing in his career indicated imaginative or flexible thinking in tactical or strategic matters. This alarmed some of his fellow officers when they heard of his selection. Brigadier General Amos Jordan actually went directly to Secretary of the Army Cyrus Vance to thwart the appointment, saying that “it would be a grave mistake to appoint him. He is spit and polish two up and one back. This is a counterinsurgency and he would have no idea how to deal with it.” Vance heard Jordan out, but told him the appointment had been made. Also concerned was Major General Yarborough, commanding the US Army Special Warfare Center at Fort Bragg. Yarborough sent Westmoreland an unsolicited 8-page letter arguing against conventional warfare using American troops: “Under no circumstances that I can foresee should US strategy be twisted into a ‘requirement’ for placing US combat divisions into the Vietnamese conflict…The key to the beginning of the solution to Viet-Nam’s travail lies in a rising scale of population and resource control.”

Westmoreland disagreed. He opted for a conventional big unit strategy using American and allied troops to seek out and engage enemy formations, leaving the more difficult task of territorial control to the ill-equipped ARVN (Army of the Republic of Vietnam). The Johnson Administration accepted his recommendations and kept delivering on his repeated requests for more troops until finally denying the last one for 206,000 more in 1968, at which time 525,000 already were in country. In his own memoirs Westmoreland complains, “In lamenting what came to be known, however erroneously, as ‘the big-unit war,’ critics presumably saw some alternative…Yet to my knowledge no one ever advanced a viable alternative that conformed to the American policy of confining the war within South Vietnam.” But they did. General Abrams, who really was an imaginative thinker, had different ideas, though by the time he succeeded Westmoreland, he was prevented from starting from scratch by the need to unwind and rework the force structure already in place amid “Vietnamization” and by vanishing support for the war at home. During Westmoreland’s tenure an attrition war inflicted huge losses on the enemy, but they were losses they were prepared to accept and replace. The effect has been described as fighting the birthrate of North Vietnam. Americans, on the other hand, were not prepared to accept steady losses just because they were fewer.

The alternative always had been properly equipping and supporting ARVN to defend its own country. Ironically, the large-scale introduction of US troops prevented this for years since naturally US troops had first call on those supplies. To take one example, the Vietcong with their AK47s consistently outgunned the ARVN, who used WW2 vintage M1 rifles. (Quite aside from its lesser firepower, the M1 is an 11.6 pound [5.3kg] weapon more than 3.5 feet [1100mm] long; the average ARVN soldier was 5 feet [152cm] and 90 pounds [41kg].) Not until 1969 were M16s supplied to the South Vietnamese in large numbers. The two books take very different perspectives on Tet and Khe Sanh, both of which were serious tactical defeats for the VC and North Vietnamese but which succeeded in further turning American public opinion against the conflict.

Both books conclude that the war could have ended far differently. “Sadly, it could have been otherwise,” said Westmoreland in his memoirs, but “otherwise” meant having followed through on progress he firmly believed to have been made by 1968, which sounds a lot like more of the same. Sorley laments the “waste” of 5 years prior when public support for helping South Vietnam was robust and an alternative strategy had a chance. Perhaps he is right that with a different commander during those years there wouldn’t be that wall in DC today, and, less importantly but still notably, we might be a less broken polity.

Country Joe and the Fish - I Feel Like I'm Fixin' To Die Rag

Wednesday, August 7, 2019

Dusty Shelves Revisited, Part 2

My DVD re-watch/discard project for thinning out my DVD library (see July 25 blogpost) continues. This time the thin-out portion of it was more successful. Once again, the plan was to pick a disc randomly from each shelf (out of a total of 16 shelves) with the intention of rating each pick as 1) re-watch and keep, 2) re-watch and discard, or 3) discard at once. Last time I re-watched and kept every one of the picks from the first 8 shelves. The results from shelves 9-16 are below. I modified the plan slightly when my first pick was a “discard at once”: when that happened I picked again from the same shelf until I found one I was willing to re-watch. No shelf required more than two picks. It has not escaped my attention that it was easier to find “discards” among the films made since 1980 than among those of older vintage. That 6 of the 8 re-watches had murder central to the plot is a coincidence; that’s a much higher proportion than in the shelf contents in toto.

Angel (1984). Unapologetic trash. (The movie has no connection whatsoever to the TV series Angel.) A 15-y.o. prep school girl moonlights as a hooker and befriends the oddball street people on Hollywood Blvd. A psycho killer is prowling the streets and killing prostitutes. After the murder of a friend at his hands, Angel grabs her landlady’s gun and goes after the killer herself. I previously had deemed this un-shelfworthy but through sheer neglect didn’t remove it. Though undeniably it has guilty pleasure elements, it’s still a Discard.

**** ****
Psychos in Love (1987): Two psycho-killers find each other in this cult movie filmed for $75,000. Not only do both love to kill but they discover that they both detest grapes. True love ensues. This is definitely not for everyone, but if your silly streak extends far enough into the dark side, you might chuckle at this. A close call, but a Discard.

**** ****
The Doom Generation (1995): In the 90s there was a bumper crop of mainstream ultraviolent films: Goodfellas, Natural Born Killers, Pulp Fiction, and more. Most of them were not just gore-fests but had something to say. I suspect director Gregg Araki found what they had to say pretentious. His very Gen-X film The Doom Generation is simply nihilistic. The meaning of its violence is that it is without meaning. The three main characters (two young men and a young woman) are fazed very little by the brutality they encounter. As for their personal affections, none takes sex seriously enough to demonstrate a twinge of jealousy in their intimate bisexual triangle. Their lives are hell – whenever they buy something the price is $6.66 – but they don’t seem to care. Johnathon Schaech (Xavier Red) and James Duval (Jordan White) play their roles well enough but Rose McGowan (Amy Blue) steals every scene. (Yes, the nominal color scheme is a bit heavy-handed.) I’d recommend this movie only to those with a particular kind of off-beat world view and a tolerance for graphic cinematic violence. Another close call, but Keep.

**** ****
House of 1000 Corpses (2003): Discard at once without rewatch. This Rob Zombie film is not bad for its type, but I don’t care much for its type.

**** ****
He Was a Quiet Man (2007): The title comes from the comments we hear all too often from neighbors and co-workers about some multiple murderer. You know them: “He was a quiet man…Very polite… He seemed so nice… He always said ‘good morning’ to me... He was a loner.” Bob Maconel (Christian Slater) is an office worker with a dreary job and horrible co-workers. He is schizophrenic and has two-way conversations with his goldfish. Day after day he loads and unloads his gun at his cubicle, trying to work up the nerve to kill all his co-workers except for one named Vanessa who has a nice smile. On a day full of particularly degrading treatment, he seems ready finally to do it, but he drops a bullet while loading his gun. As he reaches down for it, shots ring out and bodies drop to the floor: another worker has gone postal first. Vanessa is among the victims but is still alive. Bob kills the shooter before he can finish off Vanessa. Instead of being a villain as he intended, Bob is a hero due purely to timing and butterfingers. He visits Vanessa in the hospital and finds that she has been left quadriplegic. She asks him to end her life. Bob has to decide how to handle her request. This is not a bad movie, but I don’t think I’ll ever want to watch it again. So, Discard.

**** ****
Forgetting Sarah Marshall (2008): There is a type of raunchy broad comedy that doesn’t really appeal to me. The reason has nothing to do with high standards, which I don’t profess to have. I object neither to graphic sex nor low humor in film or anywhere else. There is just a peculiar blend of the two that leaves me waiting impatiently for a scene to end while much of the audience around me guffaws loudly: for example, the scene in Forgetting Sarah Marshall in which couples in neighboring hotel rooms try to outdo each other in noisy sex in order to make each other jealous. I’m not in the least offended by it. It just doesn’t make me laugh.

Sarah Marshall (Kristen Bell) is a TV sitcom star who dumps her boyfriend Peter (Jason Segel), the music composer for the TV show. The depressed Peter tries to get over Sarah with a series of one-night stands. These don’t help, so he vacations in Hawaii to clear his head. In a coincidence of the kind that happens in movies and nowhere else, Sarah books into the same hotel with her new boyfriend, pop singer Aldous Snow (Russell Brand). The hotel clerk Rachel (Mila Kunis) feels sorry for Peter and tries to help.

By the standards of its genre the movie is pretty good even though Peter is not nearly as sympathetic a character as he is intended to be. It’s just not my kind of movie. Discard.

**** ****
Killers (2010) with Ashton Kutcher and Katherine Heigl. Discard at once without rewatch. Uninteresting and unexciting spy/comedy movie.

**** ****

Home Sweet Hell (2015): This is another Katherine Heigl vehicle. I like Heigl as an actress, but her films since 2000 (even though a few were commercially successful) have ranged from mediocre to dreadful. Home Sweet Hell is mediocre. In this self-styled comedy Mona (Heigl) has specific goals that she pastes in a scrapbook. She schedules everything including sex (six times per year) with her husband Don. Don cheats with a young woman named Dusty (Jordana Brewster) who then blackmails him. Don decides the least bad option is to come clean with Mona. Mona coolly decides to kill Dusty so she and Don can stay on track toward her goals. Are you feeling the humor yet? Me neither. Discard.

**** ****

Nerve (2016): Discard at once without rewatch. It’s about a hardcore online version of Truth or Dare. OK, but not worth a re-watch.

**** ****
How to Be Single (2016) Some *Spoilers* follow. There are several overlapping plots, but the central story is that of Alice (Dakota Johnson) who feels she never has experienced being truly single. She always has been in some relationship. So, she tells her boyfriend Josh they should take a break from each other. That way, if they do end up back together, they won’t blame each other for having missed out on life. Alice moves to Manhattan where her sister Meg is a single-by-choice OB/GYN. Alice meets Robin (Rebel Wilson) at the law firm where she gets a job as a paralegal. It doesn’t take much freewheeling casual sex for Alice to decide the experiment is over; she contacts Josh to resume their relationship but discovers he has moved on and plans to marry someone else. There are side plots with other young women. Tying the plots together is Tom the bartender who offers no-strings sex and bartender-style philosophy. Alice’s epiphany (the reason for the spoiler alert) comes when she realizes that she has not been learning how to be single because she has kept trying to become part of a couple. The final scene has Alice watching and savoring the sunrise alone in the Grand Canyon, something one can do at a whim when single, but far less spontaneously when not. Being truly single can be a pretty cool thing at any time, but especially in one’s 20s. Close call, but Keep.

**** ****
Ouija: Origin of Evil (2016) Discard at once without rewatch. Snooze.

**** ****

Ingrid Goes West (2017) The smartphone co-stars in the film with Aubrey Plaza. Ingrid (Plaza) is a mentally troubled young woman with a horrible self-image and difficulties making real friendships. Retreating to her phone, she becomes a follower of Instagram star Taylor Sloan (Elizabeth Olsen) who posts about her fabulous California lifestyle of sun, fun, fashion, and joy. When Ingrid inherits money from her mom, she uses it to move west and become part of Taylor’s life, which she does by stealing her dog and then returning the “found” animal. Ingrid values her own life entirely by the likes and shares on her own Instagram account and by her inclusion in Taylor’s social media. The consequences are bleak and credible. A timely movie: Keep.

**** ****

Well, that’s a better result than last time: 9 discards. The project started to seem like work rather than whimsy, however. I’ll take a breather before considering starting at the top shelf again for a Dusty Shelves Revisited, Part 3.

Cake - End of the Movie