Eating with a Knife and Fork–American Style vs. European Style

Eating with a Knife and Fork

In cultures where the knife and fork are used for eating, there are three accepted ways of eating: European style (also called “Continental style”), American style, and a synthesis of the European and American styles.

In the European method, the fork is almost always held in the left hand, tines down, and the knife, held in the right hand, is used to push and then compact food onto the down-turned fork before the food is conveyed to the mouth with the fork, tines downward. Likewise, when meat is being cut, the fork, held in the left hand, is used to spear and secure the meat, tines pointing downward, while the knife, held in the right hand, is used to cut off the desired portion. Once cut, the desired portion is conveyed to the mouth with the fork, tines downward. In the European style, the only time a fork is held in the right hand is when it is not being used in conjunction with a knife. In such cases, the fork is transferred to the right hand, and the food is conveyed to the mouth, tines pointing upward.

In the American style, the fork is switched between the left and right hands, depending on the circumstances. When eating anything that does not need cutting, the fork is held in the right hand, tines upward, with the knife placed either vertically on the far right side of the plate or in the “three o’clock” position, with the handle resting on the table and the blade pointing into the plate. When something must be cut before being conveyed to the mouth, the fork is switched to the left hand, the knife is held in the right hand, and the fork, tines down, is used to spear the item that is to be cut, holding it in place as the knife is used to cut off the desired portion. Once cut, the knife is laid on the plate (in one of the two placements described above), and the fork is switched to the right hand. The food is then conveyed to the mouth with the fork held tines upward. In the strict American method, even if successive portions are to be cut, the fork is switched to the right hand each time food must be conveyed to the mouth.

The American style evolved out of necessity: In North America, the fork did not become a popular eating utensil until the 19th century; for the most part, only spoons and knives were used. So when food had to be cut, it was held in place with the spoon, held in the left hand, while the food was cut with the knife, held in the right. But since a spoon cannot spear food, in order to convey to the mouth whatever had been cut off, the diner would have to place the knife on his plate and switch the spoon to his right hand before conveying the cut-off portion to his mouth, obviously with the bowl of the spoon turned upward. When forks became fashionable in the United States during the 19th century, the method of switching hands simply carried over to forks.

Many diners find strict application of the American method—with all its hand-switching—to be too cumbersome, especially when cutting off several items successively. Hence, the very popular (even in North America) synthesized method, which combines the European and American styles: The fork is held in the right hand, tines upward, to eat whatever does not require cutting—meanwhile, the knife is placed on the plate, whether vertically or in the “three o’clock” position as described above. When an item must be cut off, the fork is switched to the left hand, tines pointing downward, as it spears and holds in place the item to be cut with the knife, held in the right hand. The cut-off portion is then conveyed to the mouth with the fork, held in the left hand, tines pointing downward. And if items are to be cut successively, the cut-off portions are conveyed to the mouth with the fork, tines down, held in the left hand. The fork is returned to the right hand only when the diner wishes to eat something that does not require cutting.

But regardless of the method used, it must be executed with dexterity. Food and drink should be gracefully conveyed to the mouth while an upright, though natural and relaxed, posture is maintained. The mouth should not be carried to food and drink. And it is critical that a gentleman keep his elbows sufficiently close to his sides so as not to interfere with diners sitting adjacent to him. Also, there are few things more embarrassing in life than to have whatever is being cut end up—along with everything else on the plate—on a hostess’ stark-white, linen damask tablecloth. It is therefore imperative that a gentleman pay close attention to what he is doing while eating. Cutting must look effortless—even if it isn’t. And if getting the last morsel, no matter how delicious, might risk an accident at the table, that morsel would be better left uneaten. A gentleman must choose his battles. And lamb chops have been known to defeat many a gentleman at the formal dinner table.

The History and Evolution of Men’s Ties

Ties

History and Evolution

Of all the articles in the Western man’s wardrobe, two are most capable of conferring immediate “status”: the jacket and the tie. But even more so than a jacket, which may also serve the practical purpose of providing warmth, a tie is for the most part a purely decorative, superfluous item, its primary present-day purpose being to distinguish the gentleman from the man, the executive from the laborer, the powerful from the vulnerable. (Yes, a boy scout or military man who wears a neckerchief or bandana is taught to use his tie as a first-aid implement, but such practical uses of the tie are much more the exception than the rule). If today’s suit is the equivalent of medieval armor, then the tie is the counterpart to the sword. And when society wants to go about the business of cultivating modern-day knights, one of its first acts is to require the wearing of ties. Even little boys in the primary grades all over the world don ties. The tie puts everyone on notice that a man to be reckoned with is in their midst.

But whether bow or bolo, cravat or ascot, most men wear ties without giving much thought to the accessory’s long and storied history. The pictorial record indicates that the ancient Egyptians did not wear ties. But the ancient Chinese, some three thousand years later, apparently did—as evidenced by the life-sized terracotta soldiers unearthed from the 3rd-century B.C.E. mausoleum of China’s first emperor, Ch’in Shih Huang. Remarkably, each of the almost eight thousand soldiers—no two alike—is depicted wearing a scarf-like necktie neatly tucked into his armor. It is also said that Roman orators of the 2nd century C.E. would wear neckerchiefs to keep their vocal cords warm. But the widespread use of ties in the Western World dates to 1660, when uniformed Croatian soldiers visited Paris, France in celebration of their victory over the Ottoman Empire. Each Croat soldier wore a brightly colored scarf of silk as part of his uniform, the accessory not going unnoticed by the French king Louis XIV, who had a penchant for personal adornment. Shortly thereafter, the king established a regiment of Royal Cravattes and made cravats an insignia of royalty. Fashionable Frenchmen soon began wearing neck-cloths—so much so that the French word for tie, “cravate,” is believed to derive from “Croat,” the tie-wearing culture that introduced the accessory to the French.

By 1692, the “steinkirke,” a neck-cloth with long, lace ends, worn in a nonchalant, disheveled manner, had risen to prominence. (It is said that the imprecise manner of tying and wearing the accessory originated at the Battle of Steinkirke, where French soldiers were caught by surprise and, in their attempt to dress hastily, wound their cravats around their necks and tucked the ends into the buttonholes of their uniform jackets). By 1784, the neck-cloth had transcended the military uniform and had become a means of individual expression in civilian dress. And it is that advancement, attributed to the very fashionable Beau Brummell, that would set the stage for use of the modern tie as a means of declaring and defining personal taste.

In 1818, Neckclothitania, an illustration depicting 14 popular styles of tying a neck-cloth, was published, partly as a satirical document. It is in this published illustration that the neck-cloth or cravat is first described as a “tie”; and by 1840, the word “tie” had surpassed “cravat” in the lexicon. It was also in the first half of the 1800s that menswear saw the rising popularity of scarves, bandanas, and neckerchiefs, the ends of which, rather than being tied into a knot, were passed through a finger-ring or scarf-ring at the neck.

The faster pace of life ushered in by the Industrial Revolution (1760-1840) demanded that men devise simpler, more practical ways of tying ties. In response, by the end of the Revolution, ties were being designed longer and thinner; they were comfortable and practical in the workplace; and they were easier to knot and did not become undone during the course of the day. The year 1864 marks the beginning of the mass-produced, ready-made tie, which became especially popular in Germany and the United States. And it is the ties designed between the 1860s and 1920s—the bowtie, the ascot, the cravat, and the long-tie—that remain popular and are worn by millions of men in much of the Western and Western-influenced world.

One of the most significant advances for the popular long-tie occurred in 1926, when New York tie-maker Jesse Langsdorf came up with an ingenious idea for enhancing the long-tie’s ability to knot luxuriously and retain its form—rather than eventually stretching out of shape as a result of frequent tying and untying. Probably inspired by French fashion designer Madeleine Vionnet, who in the 1920s caused a sartorial sensation with her fluid, liquid-like, bias-cut dresses, Langsdorf decided to cut the fabric for his ties on the bias (diagonally across the grain) and to make the ties in three parts. Cutting a woven fabric on the bias capitalizes upon the natural elasticity in the fabric, thereby allowing ties cut accordingly to knot with more suppleness and then return to their original shape after being untied.

The end result of cutting long-ties on the 45-degree bias is perhaps most evident in striped ties, the stripes slanting rather than running horizontally or vertically. Traditionally, ties with stripes slanting downward from left to right are European-made; and ties with stripes slanting downward from right to left are American-made. Striped ties, today one of the most popular patterns of men’s ties, date back to the 1880s, when the British military decided to abandon brightly colored uniforms (such as “red coats”) in favor of khaki and olive-drab uniforms for the sake of camouflage. The earlier, bright uniform colors were maintained, however, in the striped ties worn with the neutral-colored uniforms. Eventually, because of its military origins, the striped pattern in ties came to be called “regimental stripes.” The tradition of the club tie (as well as university ties) is also believed to date from the 1880s, when it is said that the young men of the rowing team of Exeter College removed the striped bands from their hats and tied the bands around their necks.

Another tie-advancement during the late 1920s is credited to Richard Atkinson & Company of Belfast, Northern Ireland: The technique of using a slip stitch (an “invisible” stitch made by hand) to connect the tie’s lining and interlining once the tie has been folded into shape.

Perhaps there is some truth to the notion of men associating—subliminally or otherwise—the long-tie with swords or other weapons, for after World Wars I and II, ties took on a decidedly colorful look, as if in deliberate contrast to military uniformity. Post-World War I ties, for example, were often hand-painted. And in the 1940s, after World War II, ties were so flamboyantly patterned that the look would come to be called the “Bold Look.”

Ties have also long been associated with the phallus. And when it comes to ties, size does matter. In the 1950s, for example, ties became as wide as 5” (13 cm); then in the 1960s, they were as narrow as 1” (2.5 cm). To a large degree, tie widths are determined by other fashion trends—especially those related to jackets—such as lapel width and overall silhouette. However, moderate widths, anywhere from 3” to 3 3/4” (7.6 – 9.5 cm), are regarded as the “classic” width.

What a tie lacks in width, it can sometimes compensate for with length: Over the years, depending on fashion, long-ties have varied in length from 48” (122 cm) in the 1940s to 57” (145 cm) in the 1980s to as much as 65” (165 cm) in the 21st century. (And, of course, custom-made ties may be made to specification—sometimes being made to over 70” [178 cm] in length for very tall men). In the 1940s, for example, when men routinely wore vests (also called “waistcoats”), ties were generally shorter since much of the tie would be concealed by the vest. But in the 21st century, with low-rise pants and the modern tendency for a gentleman to wear his pants on his hips rather than on his natural waistline (in the vicinity of his navel), as was the case in earlier years, ties have had to be manufactured longer so that their end(s), after the necktie has been tied, can extend to approximately the midway point of a gentleman’s belt buckle. A tie that is tied too short or too long looks immediately incorrect. Each tie, therefore, has a “sweet” spot called a “knot spot”—the precise spot where the initial overlap in the knot is placed so as to achieve the desired finished length—and a gentleman must sometimes tie and untie his tie twice or thrice in order to identify a particular tie’s “spot.”

Types of Ties

But as indicated above, long-ties are not the only neckties. There are also bowties, bolo ties, ascots, cravats, and neckerchiefs, for example, each with its own domain.

Considered the most formal of ties, bowties are traditionally regarded as de rigueur for white tie and black tie occasions; but bowties, sometimes referred to as “butterflies,” are also popular with schoolboys and professors and with businessmen and politicians—especially in the United States.

Several southwestern states, beginning with Arizona in 1971, have designated the bolo tie (also called the “shoestring tie”) the official state tie. The exact origin of the bolo tie is uncertain. Its name, however, is believed to derive from “boleadora,” the Argentinean throwing-weapon consisting of interconnected cords with a ball-like weight attached at each end. Some fashion historians date the bolo tie to the 1860s or 1880s; but it was certainly in existence by the 1940s—so much so that by 1959, Arizona silversmith Victor Cedarstaff, who helped to popularize the tie in the ’40s, received a patent for a “slide,” the decorative, ring-like device that holds the strings of the tie in place at the neck.

A cravat and an ascot are two separate and distinct ties—unbeknownst to many men. Adding to the confusion is the fact that it is a cravat, not an ascot, that may be worn to the Royal Ascot, though most often it is a long-tie, not the cravat, that is worn with morning dress in the Royal Enclosure at that great event. Remembering which is which, then, can prove for some men to be as convoluted as the legendary Gordian knot. Between an ascot and a cravat, the ascot is the more casual and, arguably, debonair. It is worn directly on the neck and tucked inside the partially unbuttoned shirt. The cravat, on the other hand, is worn on the outside of the shirt like other ties, with the shirt buttoned up to the neck. Neckerchiefs and bandanas are the simplest forms of ties. While scouts and paramilitary groups oftentimes have specific ways to fold and tie their uniform neckerchiefs, the gentleman who casually wears a bandana on his neck is expected to express his personal taste and style in the tying and wearing of that accessory.

Neckties are best when made of natural fibers: silk, linen, cotton, or wool. Leather has also been used to construct ties. Bolo ties—those “string ties” popular in western wear—are oftentimes made of braided leather or cordage stock. But by far, most fine ties are made of silk—even if the traditional fabric for the most formal tie, the white bowtie, is cotton piqué.

Tying a Tie

Of course, no gentleman would wear a pre-tied tie. So learning how to tie the basic knots is mandatory.

Much ink and even more paper have been consumed in oftentimes futile attempts to instruct men on how to tie the various types of ties and knots via diagrams supplemented with written instructions. Typically, those attempts leave many a young man—even the adroit, knot-tying, boy scout type—in more of a quandary after his attempt than before. But today, because of the plethora of instructional videos posted on internet sites such as www.YouTube.com, a gentleman may easily learn at his computer what, in generations past, he would have had to learn from a male member of his family or a good salesperson at a fine men’s store. Today, then, learning how to tie a bowtie or any other type of tie is more a matter of practice than privilege or patrimony. Traditional long-tie knots such as the four-in-hand, the Pratt, the Half-Windsor and Full Windsor, the Trinity, and the Eldredge are well demonstrated in online videos. And how-to videos on modern knots such as the “Novotny” and “Truelove” may also be found online.

Exquisite Ties

It is oftentimes said that a gentleman should never compromise on the quality of his shoes, his belt, or his necktie, for they are barometers of taste. A tie is a deceptively simple accessory: The making of a standard long-tie involves approximately 25 steps. A good tie should be made by hand—not by machine—using an exquisite shell (outer) fabric and an excellent lining and interlining. But the crème de la crème of long-ties is the “self-tip, seven-fold tie,” made by hand of a luxurious fabric, with the shell fabric being folded inward upon itself as the tie is being shaped, thereby eliminating the need for any interlining or lining of other fabrics. Consequently, the seven-fold tie consumes more than twice the amount of shell fabric used in other handmade ties and, as a result, typically costs more than twice as much. But for the connoisseur, the seven-fold tie, with its special “finishes” such as “self-tips,” a “self-loop,” hand-crocheted bar tacks, and a hand-tacked label, rewards its wearer tenfold. And immediately upon beholding such a tie, one senses its special attributes. As is said in the trade, a seven-fold tie possesses a superior “hand.”

Accessories to the Tie

For the purist, the only legitimate pocket square is one of white linen; and a white linen pocket square is only properly worn with a white shirt (or a shirt with significant embellishments in the color white). For the purist, only between ¼ and ½ inch of the white pocket square should be exposed, and the upper edge of the exposed portion should be parallel to the opening of the jacket pocket into which the square is placed. The objective is to create a visual and proportional balance between the portion of white shirt-cuff that extends beyond the jacket sleeve of a properly fitted jacket and the white pocket accessory. The “puff,” “points,” and “butterfly” pocket square formations that some men wear, then, even when of white linen, are regarded by the purist as “distractions.” And even more distracting are those colorful pocket squares—usually made of silk—that are color-coordinated with ties, shirts, or jackets. Wearing colorful pocket squares is a popular practice that, according to purists, should be abandoned posthaste. As far as the purist is concerned, if a man wants to wear a “flower” on his jacket, he should be bold and wear a real flower! After all, that is the precise purpose for the placement of a buttonhole—also called a “boutonniere”—on the left lapel of a jacket. Yes, a man is entitled to display panache, but it must be done with good taste. (It should also be noted that with black tie wear, the pocket square is always white to complement the shirt—never black to match or complement the tie or the tuxedo. Likewise, with white tie wear, the pocket square is always white—to complement the shirt, the complement to the tie being coincidental). But what the purist finds especially egregious is the wearing of tie-and-pocket square sets! That, in his way of thinking, is the fashion equivalent of painting-by-numbers. Unless a man wants to look like a dodo, he should regard tie/pocket square sets as a definite no-no—according to the purist.

(The decorative pocket square’s affiliation with modern-day menswear has its origins in ancient times, when the accessory served as a ceremonial, and then practical, handkerchief. It would not be until the 1950s that the pocket square would assume a purely decorative role.

The earliest records of handkerchiefs date back to 4th millennium B.C.E. Egypt, as evidenced by the red-dyed linen squares found at Nekhen (Hierakonpolis). By 2000 B.C.E., wealthy Egyptians were carrying bleached-white linen handkerchiefs, presumably for hygienic uses: A beautiful stela housed at the Kunsthistorisches Museum in Vienna, Austria shows Keti and Senet carrying handkerchiefs. Throughout the ancient and medieval worlds, handkerchiefs—plain and elaborate, perfumed and unscented—were used for everything from absorbing perspiration to wiping the hands and nose to shielding city dwellers from urban stench. But it was in the 1920s, with the rise of the two-piece suit, that men started wearing pocket squares in the left chest pocket of their jackets. And immediately, it became unthinkable for a gentleman to wear a jacket without a pocket square. Before the 1950s, when, for hygienic reasons, disposable tissue became preferred over cloth handkerchiefs, gentlemen would routinely carry two handkerchiefs: one in their pants pockets for personal use; and one in the chest pocket of their jackets in the event they needed to quickly offer a clean handkerchief to another person—especially a damoiselle in distress. Rather than reaching into a private, obscured part of one’s garment to procure a handkerchief, a gentleman would, in plain view, simply pluck the handkerchief from his chest pocket and present it to the person in need. But once disposable tissue became the hygienic preference, the once-practical chest handkerchief was relegated to being a purely decorative accessory. And once the pocket square no longer served its hygienic purpose, it no longer needed to be white—except for the purists. Over the years, pocket squares have waxed and waned in popularity. In the 1970s, for example, pocket squares had virtually fallen into oblivion; but since the 1980s, there has been a steady resurgence, especially of the colorful, patterned silk varieties).

The tiepin (“tie pin,” “stickpin,” “stick pin”) dates from the early 1800s and was used to secure the folds of a cravat. By the 1860s, when long-ties emerged onto the fashion scene, tiepins were used to decoratively secure the tie to the placket of the shirt, preventing the tie from blowing about in windy environs such as on board yachts and at outdoor sporting and social events. By the 1920s, however, when long-ties of very delicate silk fabrics became popular, tiepins were succeeded by tie clips (“tie bar,” “tie slide,” “tie clasp”). Tiepins, because of their design, pierce the fabric of the tie in the process of securing the tie. And with repeated use, they may cause damage to a delicate tie. A tie clip, on the other hand, clips the long-tie to the placket of the shirt without penetrating or damaging the tie in any way. Both would remain part of the menswear repertoire of accessories until the end of the 1960s, when they fell out of fashion favor—partly because ties themselves went somewhat out of fashion in the ’70s with the mod fashions of the Hippie Movement and the leisure suits of the disco era. But since the beginning of the 21st century, tie clasps (but not tiepins) have made a triumphant return.

As with all masculine jewelry, less is more. A simple, understated tie clip of silver, gold, or some other precious metal is recommended. A tie clip remains one of the few items of jewelry permissible while in military dress.

Tie Maintenance

The same care used to tie a tie should be employed in the untying of it. Never should a tie be untied by pulling the knot apart. Instead, the tie should be carefully untied by “reversing the knot.” And the man who thinks it time-efficient to slip the tied tie over his head, so as to save a few minutes the next time he wears it, should think twice: A tie stored with its knot intact will eventually lose its shape. Also, a tie should be allowed to “rest and breathe” two or three days between wearings so that it may air-dry (in the event it became dampened by perspiration) and regain its shape.

There are devices—called tie racks—specifically designed for storing ties by allowing them to hang freely so that any creases or wrinkles may fall out while the tie regains its shape. Many fine men’s stores and haberdasheries sell tie racks. Tie boxes are also excellent for storing ties. To store a tie in a tie box, both points of the tie should be brought together before the tie is loosely rolled and placed into a slot in the box. Alternatively, ties may be folded and laid flat in a drawer. To fold a tie, its points should be brought together before the tie is folded in half (and then into quarters if space is limited). Whichever storing method is utilized, it is advised that ties be kept away from direct light and from dust so as to preserve their color and texture. When traveling with ties, it is best to fold them into quarters. The folded tie(s) should then be placed into a plastic zip-lock bag and laid flat between other garments within the luggage. Upon arriving at the destination, the ties should be removed from the travel bag and allowed to hang in a wardrobe. It is best that a tie not be ironed since ironing, unless done professionally or by a person skilled at pressing ties, will flatten the edges of the tie—an undesirable result.

The “Masculine Obsolete”–How the Traditional Man Became Redundant to the Modern Woman

The Masculine Obsolete

In 1963, Betty Friedan, in her groundbreaking book The Feminine Mystique, put her finger on a problem that had been the source of much bewilderment for college-educated, suburban, American white women of the 1950s and early ’60s: the inadequacies of a life limited to being wife, mother, and keeper of household. The “problem,” which had theretofore been sensed but never fully identified, was like a festering, undiagnosed illness, or, as Friedan would refer to it, “the problem that has no name.” So Friedan, who herself had forfeited a promising career in psychology for the “higher calling of motherhood,” decided to give the problem a name: She called it “The Feminine Mystique.” Then she immediately went about the business of finding a solution to the problem. And she found it: Friedan encouraged women to seek self-fulfillment through careers—outside the home. And in so doing, she ignited what would become the second wave of the Feminist Movement in America. (Of course, for women such as black women in the Western World, for whom working outside the home had been a long-established necessity or presumption, the significance of the career component to the healing of the “Mystique” and to the defining of the Movement seemed hyperbolized, even if those same women embraced the overall aim of the Movement and sympathized with the overall symptoms of the “Mystique”). What Friedan perhaps did not realize, however, was that once she had christened the phenomenon “The Feminine Mystique,” she had also inseminated its male counterpart, “The Masculine Obsolete,” which, like its female predecessor, would go undetected, undiagnosed, and unnamed for years—in its case for half a century. Consequently, in the 21st century, men all over the Western World face their own dirty, little, no-name problem: the inadequacies of a life limited to being husband, progenitor, and breadwinner.

Males, having long enjoyed the self-proclaimed status of the “superior sex”—a distinction that until the Feminist Movement had been conceded by many a traditional female—are quietly, but increasingly, feeling inferior to modern women, who not only are equaling and surpassing men in the once-male-dominated hallowed halls of academia and in the mahogany-paneled boardrooms of the corporate world, but are also typically more skilled on the domestic front than their male counterparts, even if the typical 21st-century woman is inept, vis-à-vis women of the previous century, at the traditional domestic arts of sewing and cooking, for example. (The fact is that very few modern women can thread a sewing machine, let alone make or mend a garment; and perhaps even fewer could bake a cake from scratch or make homemade bread if their lives depended on it. But as compared to modern men, women remain head and shoulders above men in matters domestic). For many Feminist Movement-impacted men, the lyrics of “Anything You Can Do,” the 1946 Irving Berlin song written for the Broadway musical Annie Get Your Gun—a spirited duet in which a male singer and a female singer each proclaim to be able to outdo the other in a series of increasingly complex tasks—ring true—for women—especially the song’s most famous line, “Anything you can do I can do better.” In effect, then, the post-Friedan woman is both “woman” and “man,” while the present-day man is merely “man,” much to the chagrin of men and the frustration of modern women (who claim they want men to be their equals, not their superiors or their inferiors). That delicate imbalance of the sexes, which for thousands of years tipped in favor of men, has now shifted in favor of modern women. The sixty-four-thousand-dollar question, then, is: So what’s in it for women? Or, more pointedly, why are 21st-century women cohabiting with and marrying men if women can “go it alone”? What’s a boy to do—if he wants to keep a modern woman in his life? And with the increasing legalization and social acceptance of gay marriage, how can two traditionally raised gay men keep house together? Are such households destined to suffer from a double dosage of “The Masculine Obsolete”? Will such households be examples of the blind leading the blind? Or will same-sex couples—gay and lesbian—serve as an example of modern marriage, both different-sex and same-sex, where roles and responsibilities are determined not by gender and tradition, but by capacity and interest?

Traditional marriage and male-female cohabitation presuppose a symbiotic, interdependent relationship between the male breadwinner and the female homemaker. Each needs the other, and each wants the other. But while modern women, as a result of the Feminist Movement, have become proficient at being self-sufficient, many men, who, unwittingly, saw no need for a “Masculinist Movement,” have remained dependent upon women for many of their most basic needs—from cooking the food that nourishes them, to cleaning the bathrooms where they are supposed to cleanse themselves, to washing and ironing the clothes they wear to work, to making the beds in which they sleep and have sex. And, ironically, modern women are partly to blame for the domestic ineptitude of men, for women continue to play a pivotal role in the raising of antiquated sons: In general, men are not raised by their mothers and grandmothers to become the well-rounded men that modern women now desire and require. Instead, modern women continue to allow their sons to wallow in “The Masculine Obsolete,” while those same women raise their daughters to triumph over “The Feminine Mystique,” becoming independent women, capable of thriving in professional and domestic arenas (even if pre- and anti-Feminism stalwarts insist that today’s women are a far cry from the ladies of yesteryear). When the chanteuse-protagonist of the 1970s television commercial for Enjoli perfume would proudly belt out during prime-time television, “I can bring home the bacon; fry it up in a pan… ‘Cause I’m a woman….,” men should have taken notice that something in society was simmering—especially since, at the tail end of the ad, a man’s voice, presumably that of the heroine’s husband, could be heard in the background uttering, “Tonight I’m gonna cook for the kids.” But men did not take heed. Instead, they continued being “just men” rather than striving, like their feminist counterparts, to become self-contained, woman-man entities. So arguably, men are even more responsible for the perpetuation of the “Masculine Obsolete”: They had ample warning and could have stopped it in its tracks back in the 1970s. Fathers knew then and know now, first-hand, the justified resentment expressed by professional women who have to return home from work each day only to commence another full-time job of caring for their children and their professional husbands.

Despite the tendency of “mystiques” to be mystifying, a cogent argument can be made that “The Masculine Obsolete” is a male problem and therefore should be solved by men—the way women like Friedan had to take charge in creating solutions to “The Feminine Mystique.” After all, had women left the unraveling of “the problem that has no name” to men, suburban American women would still be standing in front of their state-of-the-art kitchen appliances—albeit in Manolo Blahnik and Jimmy Choo shoes—meanwhile self-medicating with Valium-laced cocktails.

But why did women, after having solved “The Feminine Mystique” on their own, and knowing that men were not in possession of the skill sets to solve their own “no-name” problem, not intervene and at least raise their sons—even if not their husbands—to be modern men? Why did women leave men to have to reinvent that wheel? Why did mothers allow fathers to throw their sons “under the bus” (or, at best, abandon them “in front of” the oncoming bus) of social change? Was it payback for thousands of years of male dominance? Were Feminist-mothers longing for some of the vestiges of a bygone era where men were separately and distinctly “men” and women were separately and distinctly “women,” such that those mothers would persist in raising their sons to be traditionalists in the face of social upheaval? How much of male-female behavior is attributable to biology and how much to socialization? How could parents have raised their daughters to be feminist ladies and sons to be modern gentlemen without redefining—to the point of distortion—the terms “lady” and “gentleman”? Are the terms “feminist lady” and “modern gentleman” oxymora?

Concomitant with women’s triumph over “The Feminine Mystique” is men’s succumbing to “The Masculine Obsolete.” Over the decades, there have been various attempts at bringing balance to the two social phenomena, but only with marginal success. After two generations of two-income families and latch-key children, many people would agree that the most effective way to raise children and run a household is for at least one person to be a stay-at-home spouse—even if the overall disposable income of the family is compromised as a result. Children, until they leave the home, need supervision. And the quality of life of a professional is enhanced if he or she is able to return to a home where he or she can relax and unwind instead of having to deal with the additional stresses of maintaining a household. And since hiring housekeepers and live-in childcare is beyond the economic reach of most families, one of the spouses must typically fill those roles. So in an attempt to accommodate the modern woman’s desire for careers outside the home, there have been various experiments with role-reversal where men—especially in cases where their wives have greater income-capacity—become stay-at-home husbands, or, as society refers to them in the cases where the couple is raising children, “Mr. Moms.” But while men, theoretically, are as capable as women at raising children and keeping house, the “Mr. Mom Model” overlooks certain social inconsistencies that are yet to be fully reconciled: Men, because of socialization, still believe that they should be the primary breadwinner in the household; and women—even bra-burning, Friedan-quoting modern ones—because of socialization, resent having to be the primary breadwinner, even if they take personal pride and feel a sense of accomplishment in the fact that they are. Consequently, many “Mr. Mom” men feel undervalued by society, themselves, and their wives (and later by their adolescent children); and many career women harbor resentment and sentiments of disrespect for their stay-at-home husbands—the way professional men have traditionally undervalued the contributions of the traditional housewife. And while some men believe that their wives should have careers and contribute towards the finances of the household, others firmly believe that the most masculine expression of manliness is the man who can support his family without the financial assistance of his wife—so much so that some such men feel justified, if not authorized, to avail themselves of certain “extramarital privileges” as a reward for being exceptional providers for their families. Meanwhile, many of even the staunchest professional feminists—lawyers, doctors, entrepreneurs—would have no qualms giving up their hard-earned careers to be the wealthy wives of wealthy men, passing days as “ladies who lunch,” volunteering with charitable organizations, serving on museum boards, being “the hostess with the mostest,” and living lives of leisure, travel, and shopping. Except for the women who have truly found their life’s calling and are therefore viscerally compelled to pursue those careers, many educated, modern women would gladly abandon their professions if they were guaranteed that their husbands could provide them with all their needs and wants. After all, for many women, the primary purpose of their careers is to provide security for themselves before marriage, in the event they never marry, or upon divorce.

Those same women, however, if they were the sole source of the family’s financial wherewithal, would harbor resentment and disrespect towards their “kept husbands.” And when both the man and woman of the house work, both tend to assume that the man should generate the higher income. So the stay-at-home-husband model seems to work best when the husband has a home-based business that generates at least as much income as that of his professional wife. In essence, then, modern women do not regard it as fundamentally wrong to be provided for by a man. But those same modern women regard it as fundamentally wrong to provide for a man. They regard the “kept man” as a corruption of nature, as an encroachment upon their femininity. Then to add fuel to the fire, some women—even professional ones—are “domestic territorialists,” oftentimes envying or resenting a man who is better at child care, cooking, and cleaning, for example, than they are—the way many men oftentimes resent and envy women who surpass them in the workplace. “Domestic territorialists” are infamous for bursting through the front doors of their homes after a long day at work in the corporate world and immediately beginning—like lionesses scent-marking their territory—“rearranging or tweaking or tidying up” the domestic accomplishments of their stay-at-home husbands.

So, in essence, men, women, and society have a lot to reconcile in the area of the parity of the sexes. As one social pundit puts it, “There won’t be true equality of the sexes until middle-aged, overweight women can walk up the beach, topless, and think they are God’s gift to men.”

Sexism, chauvinism, and a host of other “isms” convinced men and women that they were more dissimilar than alike. And it was not until the Feminist Movement that the exact opposite was proven true—that men and women are exactly alike, except in a few areas that are irrelevant under most circumstances. But despite the overall equality of the sexes, the fact remains that men are men for a reason, and women are women for a reason. Nature, in its infinite wisdom, made it so. So when Feminism, with its broad, sweeping broom, was “cleaning house,” discarding all possible distinctions between the sexes, some people—even some feminists—longed for some of the old distinctions to be preserved. For example, even the staunchest feminist regards it as infinitely charming when a gentleman rises as she enters a room or approaches his table. And even a socially unschooled man would refuse to allow a woman to hold open a door so that he may enter or exit a room before she does. (See chapter, “Out and About—Manners in Public Places”). But overall, the Feminist Movement, with its demands for equal treatment of women, has also served to relieve men of many of their social obligations to women, the result being generations of men and women, neither of whom knows how to extend or receive the time-honored social graces. But at the end of the day, despite the Movement, the onus of most manners still falls on men: It is men who must tip and remove their hats; men who must pull out chairs and open doors; men who must walk curbside…. Similarly, in a world populated with modern women, the onus is on men to become modern men.

Demystifying “The Masculine Obsolete”

Regardless of the shouldah-couldahs and the blame-game, the fact remains that as of the first decades of the 21st century, “The Masculine Obsolete” remains unsolved, wreaking domestic havoc on men, with collateral effects on women. So today, any book on men’s comportment must demystify the “Obsolete,” for at the end of the day, a gentleman of today must appeal to the lady of today. He must be like a peacock with a full tail; a bull with pointy horns; a rooster with a melodious crow. He must be a man with the domestic skills of a woman. Today, in order for a man to be regarded as “marriage material” by the modern woman, he must not only be educated and gainfully employed or employable, he also must be able to cook and clean and mend and child-care and launder and organize play-dates and sleep-overs and schedule pick-ups and drop-offs. He must be able to set a formal table and pack a picnic basket, arrange flowers in a vase, wrap Christmas presents, hand-wash and drip-dry his fine garments, and braid his preteen daughter’s hair. The modern man must be able to replace a missing button, bake and decorate a birthday cake, and make a Halloween costume from scratch. He must pre-wash his dishes before placing them into the dishwasher—in an orderly manner; he must replace the cap onto the tube of toothpaste after brushing his teeth; and, for the one billionth time, he must remember to raise the toilet seat before urinating (and to lower it after use!).

Over the years, there have been some attempts to identify the well-adjusted modern man, the term “metrosexual” perhaps being the most recognizable. But that label, on its face, neither identifies nor addresses the essence of the solution to “The Masculine Obsolete.” Instead, the term “metrosexual” tends to conjure up images of a 30-something, sophisticated, urban male with a fastidious appreciation for the finer things in life, rather than images of a man willing to stand shoulder to shoulder with his woman in all matters domestic—from the careers that finance the home to the day-to-day chores that keep the home functioning. (Besides, for a lot of men—and women—the term “metrosexual” is really a euphemism for “gay,” not “modern”).

The 21st-century gentleman must be acutely aware of the fact that, more and more, women are finding marriage to be an unnecessary—even if desired—institution. Long gone are the days when women had to marry for survival into adulthood. Not even the bearing of children within the context of matrimony is of paramount importance to many modern women. Yes, some women (even some feminists)—especially young ones wishing to marry for the first time—still hope for “the perfect husband”: the princely, wealthy, handsome gentleman. But the majority of women, for whom such a reality exists only in fairy tales, simply want a husband who can carry his own weight outside and inside the home.

For many 21st-century men, however, the idea of marriage or cohabitation with the opposite sex is far more practical and much less romantic: Many men need a woman in order to maintain the level of civility and domesticity to which they have become accustomed while living at home with their mothers. Left to their own devices, many men would revert to an almost feral state—within the context of society: bed sheets would go weeks unchanged; bathrooms would go years uncleaned; dirty dishes would remain piled up in the kitchen sink until a female friend offers to wash them; leftovers would grow mold in the refrigerator; worn underwear would get turned inside-out and worn again; last year’s pizza box—with half-eaten slices in it—would remain under the bed; locating an ironing board and its accompanying iron would become the equivalent of looking for weapons of mass destruction.

So since parents, for the most part, continue to fail miserably at raising their sons to be domestically inclined—whether, in the case of mothers, it is intentionally done in order to inflict the pain they had to endure on their daughters-in-law, or, in the case of fathers, on account of some irrational fear that domesticating their sons will render them gay or effeminate—men must take it upon themselves to educate themselves in the ways of keeping house. Men must cure men of “The Masculine Obsolete” by making men as capable as women in the household, for the future of traditional marriage—the cornerstone of human society—depends upon domestically skilled men. Continued domestic ineptitude will make men irrelevant to the modern woman, or, at best, relegate their relevance to that of sexual objects for heterosexual and bisexual women. But marriage aside, while the average American male in 1950 was married by the age of 24, twenty-first-century men are increasingly getting married in their early 30s, thereby making it all the more practical for men to be able to take care of themselves.

Given the dismal failure of parents to educate their sons in the art of homemaking, perhaps the most efficient way to eliminate domestic impotence in men would be to mandate home economics courses for schoolboys and to offer and encourage such courses on the university level. “Home Ec. 101—for Jocks” could very well become a popular elective on college campuses across America. (Traditionally, as a supplement to basic “home training” by parents, young women, until the early 1970s, were expected to attend “finishing school” or “charm school” as well as take home economics classes as part of the junior high and high school curricula. And home economics was a popular college major amongst young women. On the junior high and high school levels, young men were trained in “wood shop” and/or “machine shop.” A similar approach could be instituted in the 21st century, except that young men and young women would both take home-ec and wood shop/machine shop courses, the aim being to make men and women domestically competent and complementary. In addition to courses, men should also be encouraged—the way they are with sports, for example—to routinely view home improvement programs so as to hone their prowess in the home). Until such policies and practices are implemented, “The Masculine Obsolete” will likely persist; modern women will continue to regard men as incompatible for marriage or long-term cohabitation; and the institution of marriage, the cornerstone of human society, will decline even further.

The Rich History of Penny Loafers–one of the all-time classic shoes

Loafers

Around 1930, Norwegian shoemaker Nils Gregoriusson Tveranger (1874-1953) introduced a new slip-on design, which he called the “Aurland moccasin.” (The shoe would later come to be called the “Aurland shoe”).

As a boy of 13, Tveranger traveled to North America, where he learned the craft of shoemaking. At age 20, he returned to Norway, apparently influenced by what he had experienced in the New World, for his “Aurland moccasin” resembles the moccasins typically worn by the Iroquois (as well as the moccasin-like shoe traditionally worn by the local people of Aurland). Shortly thereafter, Norwegians began exporting the shoe to the rest of Europe. Americans visiting Europe took a liking to the shoe—perhaps because of its stylistic similarity to the Native American moccasin—so much so that Esquire magazine featured an article on the by-then-popular shoe. The article was visually enhanced by photographs of Norwegian farmers wearing the shoe in cattle loafing sheds, and the rest, as they say, is history…. In the 1930s, the Spaulding family of New Hampshire, inspired by the Norwegian shoe, began manufacturing a similar moccasin-like shoe, which they called “loafers.” The appellation would eventually become a generic term used to describe any moccasin-like slip-on shoe.

In 1934, G. H. Bass of Wilton, Maine began making his version of the “loafer,” which he called “Weejuns,” a corrupted truncation of “Norwegians.” One of the distinguishing features of Bass’ design was a strip of leather stitched across the saddle of the shoe, the strip featuring a stylized crescent-shaped cutout. By the 1950s, “Weejuns” had achieved ubiquity amongst students, who oftentimes would, for the safekeeping of their “candy money,” slip a coin—usually a penny, enough in those days to purchase one or two candies—into the crescent-shaped cutout. The shoes then came to be known as “penny loafers,” a moniker that endures to this day.

From Native American moccasin to Northern European farmers’ shoe to Southern European summer shoe to classic collegiate footwear, the loafer—especially in America—has evolved into one of the all-time great fashion classics. Today, one would be hard-pressed to find a manufacturer of shoes that does not have some version of the loafer in its collection. Besides being unisex, the shoe has acquired general acceptability: Men have been known to wear patent leather loafers with their tuxedos; lawyers wear calfskin loafers with their fine suits to court; university professors still regard them as essential to the academic wardrobe; and urban dwellers consider the loafer—in all its variations—the ultimate “city shoe.”

One of the greatest resurgences of the loafer occurred in the 1980s when 1950s-inspired secondary school and collegiate fashion called “preppy” (an affectionate derivative of “[college] preparatory”) became all the rage in the United States and then beyond. By the end of the ’80s, the loafer had come to symbolize a “nonchalance towards privilege.” It was not (and still is not) uncommon for a fashionable young man to wear a blazer with faded jeans and loafers—without socks, of course.

The History of Umbrellas

The Umbrella

The word “umbrella” derives from the Latin word “umbra,” meaning “shade.” And “parasol” derives from the Spanish/French “para,” meaning “stop,” and “sol,” which means “sun.” The archaeological record establishes the presence of the parasol and umbrella in many of the great cultures of the ancient world: in Egypt; Assyria; Ethiopia; India; Persia; Greece; Rome; the Mali, Ghana, and Songhai empires of West Africa; and the Aztecs of Mexico, for example. But nowhere was the use of umbrellas and parasols more prominent than in ancient China, where they appear by the 11th century B.C.E.

Until the 18th century, parasols and umbrellas were used for protection from the sun, the difference being that parasols were carried over the person (by an attendant), while umbrellas were carried by the person (him/herself). For a brief time during the Roman era, people used umbrellas for protection from the rain, but the idea never garnered popular support. And in ancient Greece, it became popular for ladies to have parasols held over their heads at feasts in honor of Pallas Athena. As a result, the parasol came to be associated with women—especially those of the privileged classes.

The popularity of umbrellas in Europe dates from the 12th century, when Pope Alexander III presented the Doge of Venice with a parasol to be carried over his head (a custom that would endure until Napoleon Bonaparte brought an end to the Venetian Republic in 1797). By the 15th century, the umbrella had become a fashion accessory for shielding people from the sun.

According to the Oakthrift Corporation article titled “The History of Umbrellas—10 interesting facts you never knew,” umbrellas became popular in France amongst ladies in the 17th century, and by the 18th century, use of the accessory had spread across Europe. But umbrellas and parasols did not appear in England, Scotland, and Ireland until Portugal’s Catherine of Braganza married Charles II in 1662 and introduced the accessory to his realm. [It is believed that the superstition forbidding the opening of an umbrella indoors derives from the untimely death of Prince Rupert shortly after he had been presented with a gift of two umbrellas from the King of Batam in 1682].

Between 1685 and 1705, there were efforts, led by the English, perhaps on account of their notoriously rainy weather, to waterproof umbrellas. Unlike in the Roman era, the concept of using umbrellas for protection against the rain caught on in England. And the waterproofing of umbrellas gave rise in the mid-18th century to the distinction between the parasol as an accessory to shade one from the sun, and the umbrella for protection from the rain.

The earliest written record of a collapsible umbrella with bendable joints dates to 21 C.E. China, but the archaeological record suggests that such devices may have been used as early as the 6th century B.C.E. in China. In 1786, John Beale registered the first umbrella patent. His design consisted of a circular, coned canopy supported by ribs connected to a central shaft. But it was Samuel Fox’s U-shaped steel-rib construction that revolutionized umbrella construction. Fox’s design is still used today.

Perhaps because of the precedent set with the Pallas Athena feasts during antiquity, the umbrella would remain an accessory primarily used by women—until Jonas Hanway popularized its use by English gentlemen in the middle of the 18th century, so much so that umbrellas would, for a time, come to be called “Hanways.” But even so, there was resistance to umbrellas: Hackney coachmen regarded the accessory as competition; and some members of the privileged classes regarded being seen in public with an umbrella as a tacit admission of one’s inability to own a carriage. By the 19th century, perhaps in a reactionary ostentatious display of wealth coupled with a bona-fide need for portable protection from the rain, umbrellas had become fancy—with handles of precious metals studded with gemstones, for example. But by 1852 the umbrella had become a necessity, so much so that whalebone was replaced by mass-produced steel ribs as the structural foundation for umbrellas.

Tanned skin became fashionable around the 1930s, and with it came the beginning of the end of the parasol. The umbrella, however, being a portable, practical shield from rain, remained popular.

In 1928, Hans Haupt’s pocket umbrellas became available on the market. In the 1950s, nylon replaced oiled cotton canvas as the fabric of choice for umbrella canopies. And in 1969, Totes, Inc., obtained a patent for the first functional folding umbrella. Today, in the United States alone, over 33 million umbrellas are sold per year.

The English have, by necessity, mastered the art of making umbrellas: English umbrellas are considered the best in the world. And of all English umbrella manufacturers, Swaine, Adeney, Brigg & Sons is considered the absolute best. And the best “Brigg” umbrellas are those made of triple-woven silk, which expands a little when wet, thereby rendering the fabric absolutely waterproof.

If a gentleman intends to invest in a good umbrella, he would be wisest to select one in the color black, for wherever umbrellas are appropriate, a black one is always most appropriate.

20/40/60 Marriage–Redefining the “Successful Marriage”

It is a well-documented fact that in free societies where nubile persons choose their marriage partners, half of all marriages end in divorce. And it is fair to postulate that a significant percentage of the marriages that endure are not happy, satisfying unions. The institution of marriage in such societies, then—at least in its present expression—is significantly flawed, for a success rate of less than 50% would qualify most other things as in need of “improvement” or “major overhaul.”

Many people would agree that the concept of marriage—of two people officially and legally joining forces and resources to build a life together—is a good thing. After all, “life is hard,” so why “go it alone”? Besides, everyone needs someone to drop him off at the airport or pick him up off the bathroom floor if he falls and injures himself. But one of the fundamental flaws of marriage as presently defined is that it must endure for life in order to be regarded as “successful.” And it is that premise—codified in the “until death do us part” clause commonly found in religion-based marriage ceremonies—that generates much of the grief associated with the dissolution of marriages. Another fundamental flaw of marriage is the notion that spouses are self-contained, self-sufficient, autonomous units, capable of providing for all of each other’s needs and wants—emotional, sexual, financial, social—for life. But that is simply too tall an order for many people.

While there is something sublime about two people meeting and falling in love in their 20s, getting married, building a life together, raising children and then grandchildren, then walking off, hand in hand, into the sunset of their lives, that is not always or even typically the case. And while such a scenario may be regarded as the ideal expression of marriage, it should not, in a free society, be regarded as the institution’s only valid expression. What reasonable person would insist that a Rolls-Royce is the only valid automobile, or that the Gucci loafer is the only legitimate loafer, or that filet mignon is the only cut of beef worth eating, or that the only ice cream worthy of that delicious appellation is Häagen-Dazs? If graduating summa cum laude were the only acknowledged way to graduate from college, very few people would possess acknowledged college degrees. In other words, in many facets of life, “good enough” is good enough. So why not apply that same standard when assessing the success or failure of a marriage?

In modern societies, where individual freedoms and pursuits are regarded as a birthright, marriages that endure 10 years and beyond are increasingly being regarded as “successful” marriages. After all, in free, 21st-century societies, where people are expected to move in search of opportunity, where women have the means for independence, and where personal happiness is paramount, it is not unlikely that two people who are compatible today will become incompatible a decade later—through the fault of neither person. The fact is that people change—fundamentally—decade by decade. And a gentleman’s outlook on life in his 20s could be very different from his outlook in his 30s, with both outlooks, even if diametrically opposed, being appropriate for their corresponding decade of life. Likewise, society also changes in fundamental ways: There was a time, for example, when the ideal career model was to secure employment upon leaving college or high school, work for the same enterprise for 40 or even 50 years, then retire (and hope to live long enough to enjoy retirement). In the 21st century, however, that outlook is almost inconceivable—for both employer and employee. Similarly, marriage-for-life may have been the only ideal model fifty years ago, when the genders were interdependent; when people were more religious and were married in religious ceremonies that incorporated the promise of marriage for life; and when, because of the preceding, there was a stigma attached to divorce. But by the 1970s, with the introduction of no-fault divorce, the rekindling of the Women’s Movement and the Sexual Revolution, the decline of religion (even if not of spirituality), etc., society began to interpret and define what constitutes a successful marriage more broadly.

Today, a significant percentage of nubile persons in free societies marry more than once—and for good, albeit different, reasons. It is not uncommon for people to finally “get it right” only after their second or third attempt. So it would perhaps behoove society to own up to the fact that multiple marriages over the course of a lifetime are now typical and to adopt an outlook on marriage that accommodates them.

Twenty-Forty-Sixty Marriage (or Stability/Sex/Compatibility Marriage)

Twenty-Forty-Sixty Marriage offers the social framework for the average person to marry three times over the course of his lifetime—having a “successful” marriage each time by accomplishing the realistic goals earmarked for each of the three tiers of marriage: marriage for stability in one’s 20s; marriage for sex in one’s 40s; and marriage for companionship in one’s 60s.

Twenties Marriage (Stability-Marriage)

One of the age-old problems with child-bearing is that marriage is the condition precedent to the “legitimacy” of children. But in a liberal, tolerant society where “family” is being redefined to include other models, and where same-sex marriage and the use of marijuana for medicinal purposes are now accepted, why should marriage remain the prerequisite for legitimacy? Why couldn’t legitimacy, for example, be accomplished via a contractual agreement between consenting parents who legally acknowledge paternity/maternity and agree to share equally in the responsibility of raising those children? Would women then feel less compelled to marry in their 20s simply to bear their children within the context of wedlock?

But 21st-century society, despite all its social advancements and inclinations towards tolerance, is not yet at that juncture. So for the time being, women continue to marry in their 20s—not necessarily because they want to be married, but because their 20s is the best decade for bearing children, even if not for raising them. Further complicating matters is the fact that their male contemporaries—young men in their 20s—typically do not have the financial or emotional wherewithal to support a child-bearing wife and the couple’s offspring. Consequently, many well-intentioned marriages between loving young people end in bad divorce. And it is bad divorce—not divorce in and of itself—that is the culprit. The days of “marriage come hell or high water” are practically over. Today, thankfully, people recognize that the right to marry is concomitant with the right to divorce, and that divorce, under certain circumstances, can be a very good thing.

Under the Twenty-Forty-Sixty Marriage model, a woman in her twenties would marry a man in his forties who is capable of providing a stable environment for the bearing and rearing of children. The symbiotic trade-off is stability for the younger woman, sensuality for the older man. While for a woman in her 20s, marriage to a man twenty years her senior may not be as sexually rewarding as marriage to a man in his 20s, marriage to a financially and emotionally established older man is arguably the relationship that is in the best interest of child-rearing. (For many couples composed of two young adults in their twenties, the stresses of child-rearing, career, financial instability, compromised individuality, etc., are overwhelming.)

The Twenty-Forty-Sixty Marriage model further postulates that men in their 20s should also marry for stability—to women in their 40s (some of whom will have obtained their stability whilst married to men 20 years older). A young gentleman’s marriage to a more mature, stable woman—without the burden of child-bearing/rearing (since, theoretically, his more mature wife would have already borne her children in her 20s with an older husband)—could serve as the perfect environment for a young man to pursue upper-level education, build his career, and mature emotionally. The benefit to the more mature wife is sex with a young, virile husband, thereby compensating for the sex she may have missed during her child-bearing twenties while married to a man in his forties.

Forties Marriage (Sex-Marriage)

Everyone has a right to experience sexual pleasure in his lifetime; it is his birthright. The sexual peak of the human animal—when his body, mind, and soul are most at equilibrium—occurs between the ages of 24 and 48. While there is a sublime beauty to young love-making, where two inexperienced lovers discover the art of sex together through trial and error, there is wisdom in the notion that sex is best enjoyed when at least one partner is sexually experienced and can guide the other. In Forties Marriage, the older man, having learned in his 20s the art of pleasing a woman under the tutelage of his more mature wife, marries a woman in her 20s. He provides a stable environment for himself, her, and the children they will bear together, while his young wife invigorates his sex life. Similarly, a woman in her 40s, having borne her children with an older man and having learned the art of pleasing a man during her first marriage to that older husband, will bestow that knowledge upon her young husband.

When Marriage for Stability and Marriage for Sex are combined, then, they allow everyone—women and men—to experience the comforts of a stable environment in which to bear and rear children and to establish him/herself professionally, emotionally, and financially. Everyone gets a chance to do everything—well.

The Twenty-Forty-Sixty model gives rise to the question: How does divorce—even “good” divorce—impact children? Children are astoundingly resilient, malleable, and accommodating to change—much more so than adults. When parents uproot and move in pursuit of opportunity, for example, children move along with them and readjust. For children, “normal” is whatever has been presented to them as “normal”—even if, objectively, it is “abnormal.” Unlike children of the 1950s, for whom divorce was abnormal and oftentimes traumatic, 21st-century children regard divorce as “the new normal.” Rarely is there a modern family without numerous examples of divorce. So children today understand and are comfortable with the concepts of shared custody, alternating holidays, multiple homes, etc. And children today feel that they and their parents are entitled to individual happiness. Amicable (“good”) divorce, where parents, in the best interest of the children, remain civil or friendly during and after divorce, tends to be far less traumatic for children and is even regarded by a growing number of them as the preferred alternative to a contentious or dysfunctional marriage.

Under the Twenty-Forty-Sixty model, “transitional divorce” is factored into the tiers of marriage from inception, thereby minimizing the occurrence of bad divorce while increasing the likelihood of divorce that is in the best interest of all parties involved. Under the Twenty-Forty-Sixty construct, transitional divorce is part and parcel of marriage. It is the norm. It gracefully (even if not seamlessly) allows for divorce to occur in Stability-Marriage and Sex-Marriage when the objectives of those marriages have been achieved. Transitional divorce does not sever relationships on an emotional or spiritual level, only on a legal one. And transitional divorce is, of course, not mandatory: If both parties of a couple agree to transition together into the next tier of marriage, they are able to do so. Additionally, the Twenty-Forty-Sixty model in no way infringes upon the traditions or moral fabric of marriage for life. Proponents of marriage for life are free to pursue their traditional ideals.

Sixties Marriage (Compatibility-Marriage)

A person in his sixties is in a different place—physically, emotionally, spiritually, and socially—than a person in his forties: Their priorities are fundamentally different (partly because people in their sixties are acutely aware that their lives are beyond half-lived). Compatibility-Marriage allows people in their sixties—after having raised their children, having had fulfilling sex lives, having attained their career goals, and having made their marks on the world—to align with each other for the sheer pleasure of companionship, with sex being, at best, incidental to or a perquisite of the union. It is not uncommon in modern, transient societies for people in their sixties to establish new homesteads for their retirement years, oftentimes geographically distancing themselves from their children and grandchildren in the process. Conversely, it is not uncommon for people in their sixties to be left behind by children seeking opportunities and establishing their own branches of family in other cities and countries. So many people in their sixties desire to be officially and legally assured of companionship that is likely to endure for the remainder of their lives: marriage. But unlike in their previous marriages, the primary motivation for Compatibility-Marriage is friendship. Because of the nature of Compatibility-Marriage, the gender or sexual orientation of one’s spouse becomes less material; thus, in societies where same-sex marriage is legal, the pool of potential Compatibility-Marriage spouses is effectively doubled. A platonic relationship is the foundation of Compatibility-Marriage. And its objective is to endure for the remainder of life.

But sex may still play a meaningful role in Compatibility-Marriage. Typically, by age 60, many people will have conquered or come to terms with their “hang-ups” about sex, so sex (to the extent that it exists) within the context of Compatibility-Marriage can oftentimes be exciting and liberating. Because sex is not the primary motivation for Compatibility-Marriage, partners tend to be less sexually possessive or exclusive with their spouses. Compatibility-Marriage, therefore, is oftentimes open to non-committal extramarital sex, ménage à trois, hired sex (in jurisdictions where it is legal, of course), etc. After all, people in their 60s are grown people and should be mature enough to handle the intricacies, subtleties, and complexities of sex and sexual relationships. People in their sixties are also acutely aware that they are in the final phase of any meaningful sex life. So whatever sex they engage in during those years should be fulfilling, exciting, interesting, and engaged in for the purpose of strengthening their Compatibility-Marriage.

[The Twenty-Forty-Sixty model also applies to same-sex marriage, except that the child-bearing component of Stability-Marriage (Twenties Marriage) does not apply to same-sex male marriage (though it does apply to same-sex female marriage).]

The History of the Fork

The Fork

The fork was around long before it staked out its place on the dining table. The Egyptians used large forks for cooking; and the word “fork” derives from Latin “furca,” meaning “pitchfork.” As a dining utensil, however, the fork is believed to have originated in the Eastern Roman Empire, also known as the Byzantine Empire, where it was in common use by the 4th century. By the 10th century, the fork had become popular in Turkey and the Middle East, spreading thereafter to southern Europe by the second millennium.

The earliest forks had only two widely spaced tines, which were straight, not curved slightly upward as they are today. And their handles tended to be about four inches long and thin, with a circumference about half that of a modern-day drinking straw.

To a large extent, the popularity of forks in the West came literally and figuratively at the hands of two Byzantine princesses who married into Western aristocracy: Theophano, who in 972 C.E. married Otto II, King of Germany and Holy Roman Emperor (967-983); and Maria Argyropoulaina, who wed the son of the Doge of Venice in 1004. By the end of the 11th century, the table fork had become known in Italy amongst the wealthier classes. By the 14th century, the fork was clearly on its way towards being an accepted dining utensil in Italy. Its acceptance grew steadily, and the fork eventually became a typical Italian household utensil by the 16th century, some 500 years after its introduction. In 1533, the 14-year-old Catherine de’ Medici and her entourage introduced the fork to the French when she left Italy for France to marry the future King Henry II. During the Italian Renaissance, each guest would arrive with his own fork and spoon in a decorative box called a “cadena,” and Catherine and her court took that custom along with them to France. It was not uncommon for royals and nobles to have forks made of solid gold or silver, though iron and pewter, for example, were used for the forks of the less privileged.

By the 16th century, the fork had become a part of Italian etiquette, and Spain, Portugal, and France followed suit (though it is widely believed that the Infanta Beatrice of Portugal introduced the fork to her country in the middle of the 15th century). Thomas Coryate is credited with introducing forks to England in 1608 after seeing them in use in Italy during his travels; the initial English reaction was consistent with that of most of Europe—that forks were effeminate and pretentious. In much of northern Europe especially, where most eating was done with the hand or with the aid of a spoon when necessary, the fork was viewed as a decadent, Italian affectation. By the 18th century, however, most of Europe used the fork.

The fork design popular today, with its four slightly curved tines, was developed in France at the end of the 17th century and in Germany in the middle of the 18th century. It was not until the 19th century—almost 1500 years after it was first popularized in Byzantium—that the fork would become a household item in North America.