Brian Feeney

Getting Those Hyperbolic Design Talk Blues

Mattan Griffel:

Very quickly, society is becoming divided into two groups: those that understand how to code and therefore manipulate the very structure of the world around them, and those that don’t – those whose lives are being designed and directed by those that do know how to code.

The surface impression I get from statements like these is that designers/developers are often too self-important. For the last ten years, I've been reading "serious" designers like Michael Bierut and Steven Heller and other Design Observer type folk, and the assumption that so many of them make is that design has as much power as money or political status. I don't think it does. But I do understand the idea.

Yes, one of the functions of design (especially product design) is to control the behavior of the user, but it does so passively. Literally passively.

I understand what Mattan is saying. As technology becomes more and more virtual, i.e., web-based, those who can code have more control. Just like city planners have control over city dwellers, and interior decorators have control over people in their own living rooms. But it is incredibly hyperbolic to say that entire lives are "being designed and directed" by someone typing letters into text editors.

I think it comes down to the fact that we don't yet have a very clear way of talking about how design is affecting our lives in 2012. Part of the conversation around the New Aesthetic is questioning whether or not it actually is a thing (I'm not yet convinced), because the idea is so vague. Computer-generated data and art can look really strange or it might look incredibly analogue. Skeuomorphism is a thing because transitioning from landline telephones and paper address books into digital versions is difficult for many people. By the time we get to 2020, we will have integrated a lot more internet-connected products into our daily lives. Slowly but surely we will adapt. It will all become normal. Or the rapid pace of change will start to feel normal.

Do I think it's important people learn to code? Designers should learn how to code. CEOs of technology startups should know something about coding. But everyday "normal people" (as Marco would say) don't need to know any more about coding than they do about farming. They get their apps the same way they've been getting their hamburgers for decades.

I could probably write an entire article on the hyperbolic, self-inflated talk of some design writing, and I might, but what I really want to say right now is: Chill. Technology is pretty awesome at the moment, but we're still living in a physical world. Let's not lose sight of that.

April 15, 2012

articles


Innovate with RSS

Dave Winer thinks it’s time to give RSS another shot:

If I were Google I’d fight Facebook with RSS. Re-integrate RSS with Chrome. Make subscription something browsers support. Provide an open, clonable, simple web service that returns a subscription list for any user with permission of course so a million ideas for RSS aggregators can bloom. A browser button that says Subscribe To This. Surround Facebook with open innovation from small developers everywhere.

I agree. I really appreciate the functionality of RSS, of being able to curate my own reading list, and I’m certain most people would, too, if they understood it better. As it stands, RSS isn’t so easy for the average internet user to comprehend. It’s not obvious enough. RSS needs a rebranding.

If I were Google and were taking Winer's advice, I would focus on rebranding the experience. A few things need to happen for it to be a success. First, RSS functionality needs to be as clear and fundamental as the address bar, tabs, and navigation. Place it where it can't be missed. Second, make organizing feeds a smooth and frictionless process. It could still be folders and tags, but designed so it doesn't take much time or thought. And third, advertise it. Make it clear how many feeds are out there (A: millions), and convince people of how much they'll benefit.
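
To make the mechanics a little more concrete, here is a rough sketch of what a browser-level "Subscribe To This" button might do behind the scenes: fetch the page, look for the standard RSS/Atom autodiscovery link tag, and drop the feed into a personal subscription list stored as OPML. This is purely illustrative; the file name, helper functions, and flow are my own invention, not anything Google or any browser actually ships.

```python
# Hypothetical sketch of a "Subscribe To This" flow: discover a page's feed
# and append it to a local OPML subscription list. Not a real browser API.
import urllib.request
import xml.etree.ElementTree as ET
from html.parser import HTMLParser
from urllib.parse import urljoin

FEED_TYPES = {"application/rss+xml", "application/atom+xml"}

class FeedLinkFinder(HTMLParser):
    """Collects <link rel="alternate"> feed URLs advertised by an HTML page."""
    def __init__(self):
        super().__init__()
        self.feeds = []

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "link" and a.get("rel") == "alternate" \
                and a.get("type") in FEED_TYPES and a.get("href"):
            self.feeds.append((a.get("title") or "Untitled feed", a["href"]))

def discover_feed(page_url):
    """Return (title, absolute feed URL) for the first feed a page advertises."""
    html = urllib.request.urlopen(page_url).read().decode("utf-8", "replace")
    parser = FeedLinkFinder()
    parser.feed(html)
    if not parser.feeds:
        return None
    title, href = parser.feeds[0]
    return title, urljoin(page_url, href)

def subscribe(page_url, opml_path="subscriptions.opml"):
    """Add a page's feed to an OPML subscription list, creating the file if needed."""
    found = discover_feed(page_url)
    if found is None:
        raise ValueError(f"No feed advertised at {page_url}")
    title, feed_url = found
    try:
        tree = ET.parse(opml_path)
        body = tree.getroot().find("body")
    except FileNotFoundError:
        root = ET.Element("opml", version="2.0")
        ET.SubElement(root, "head")
        body = ET.SubElement(root, "body")
        tree = ET.ElementTree(root)
    ET.SubElement(body, "outline", text=title, type="rss", xmlUrl=feed_url)
    tree.write(opml_path, xml_declaration=True, encoding="utf-8")

if __name__ == "__main__":
    subscribe("https://example.com/blog/")
```

That's really all a "Subscribe" button would need to do under the hood; the hard part, as I said, is the presentation.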

If done well, I can't see how it wouldn't take off. People are devouring apps like Flipboard and News.me. Since Google wants so badly to be a Social player, Winer might be right in suggesting they take up the reins on RSS. What I'm imagining isn't too far off from Google Reader. They're close. But Google Reader feels more like a feature than a central product. It just needs all the TLC they gave to the Google+ failure.

Clearly, Google wants what Facebook’s got. And I think Winer’s right in that it should tack in a different direction. Instead of attempting to clone Facebook, it should make the rest of the internet feel as Facebook-like as possible. Facebook is a walled-in garden. RSS is open ocean. If done right, there wouldn’t be the friction of a suspicion-inducing opt-in. If you’re reading on the internet, you could just as easily be doing so via RSS.

Another thought: RSS could also be a revenue stream for magazines' online content. They could sell subscriptions to their writers' feeds for $1/yr (or something), or subscriptions to different sections (e.g., Financial, Editorial, etc.), or, for a higher fee, to bundles, or maybe even every feed (the entire magazine) for a premium price. For example, I'm sure the New Yorker would make a pretty penny selling a subscription to only their fiction section. Even at $12/yr, I would probably buy that. I would certainly pay $1/yr for just the World in Review section of the Economist.

I don’t assume I’m the first to have these thoughts. But they make sense to me, and I wonder: what’s the holdup?

March 23, 2012

articles


Rocket Juice and the Moon

Oh, man. When was the last time you heard really good new funk? Funk that wasn't just programmed to hit the right beats, but music played by actual fingers and hands and aimed right at that place in your trunk that makes you involuntarily scrunch on the one? Probably the last time you listened to a twenty-year-old Fela Kuti record. And that was probably because Tony Allen was nailing it. He's nailing it again on Rocket Juice and the Moon.

One of the biggest complaints about The Good, the Bad, and the Queen (the previous similarly outfitted supergroup by Damon Albarn) was that Allen was largely missing from the mix. And it's true. His drumming, though solid and in the groove, was never allowed to shine. That is absolutely not true on Rocket Juice and the Moon. He is the center, the sun, the all of it . . . and it's awesome. It makes me wonder why there are so few records which feature live drums so prominently. [Note to self to go hunting for them.]

Many of the songs are too short (the tracklisting is 20 deep, the album 62 min.), which does a disservice to many of the grooves Allen provides. There's a reason so many Afro-Pop recordings run 20 minutes or more; the mind-blowing quality of the rhythm sections more or less demands it, and never once taxes my ever-decreasing American short attention span. So on more than a few tracks here you're going to be caught by surprise when your ears notice the fade-out sliding in. But many of them do ride out long enough that you'll feel satisfied. There's a lot to like.

This is a classic Albarn project in that he's mostly obscured by the talent around him. Like with Gorillaz, if you aren't all that familiar with the way he works, you might not realize this was one of his records. It even took me, a huge Albarn fan and devotee, over a year to realize that the first Gorillaz record was actually a "solo" record, as were the following two. That's his M.O. Aside from Blur, which is a true band in the classic sense of the word, what Albarn does with his other projects is quintessential collaboration. All of us music lovers, I assume, have played the Dream Supergroup game, in which we try to imagine what would happen if all of our favorite musicians made a record together. This is essentially what Albarn has been doing the last ten years. For chrissakes, on the Plastic Beach tour, he had Mick Jones, Paul Simonon, Lou Reed, Mos Def, Tony Allen, Bobby Womack, and De La Soul all on the same goddamn stage. Jeeeeeeezus.

There aren't any tricks to Rocket Juice and the Moon, no gimmicks. This is what happens when you put a handful of truly gifted musicians in a room and hit record. Fatoumata Diawara, a rising star from Mali, gives great performances on the four songs she sings, especially on Lolo, her voice gliding goldenly over the trills and glissandos which are a signature of Malian (Berber?) style. There's a slight scratchiness to her voice which I find extremely alluring, and her debut album, Fatou, is apparently a must-hear. Erykah Badu is featured on three tracks, and her familiar voice is unexpected but a perfect fit. The Hypnotic Brass Ensemble impress at every appearance.

Rocket Juice and the Moon is also hard to classify, genre-wise. It's an African record, I suppose, and might be shelved in the World Music section of your local record store. But listen to any of the latest quote-unquote world music records being released and there's usually enough hip-hop, soul, and jazz in there that categorizing them is a worthless effort. This applies to R J and the M.

The record genuinely supports multiple listens. In fact, it probably won't be until the second or third turn that you are able to truly appreciate Flea's excellent contribution, or maybe even notice where Albarn fits in (excluding the two songs on which he provides the vocals). Tony Allen shines that brightly. And the record is that much fun. It's sunny. Perfect for a year in which March is seeing 80-degree days. I feel like we're all going to get out much more in 2012, our windows open more often, our skin tanned darker than in previous summers. In times like this, it's great to have a record which unashamedly celebrates life. It's hard to imagine who might be turned off by what these guys have made, and I suspect if it gets enough exposure, it will surely end up on plenty of Best of 2012 year-end lists. It's good.

March 22, 2012

articles


On Tinkering and Changing the World

There was a time when the edges of technological achievement were within the bounds of the common man, but we've long since left that time and that human-machine relationship. It was the age of the radio, the car, the home chemistry set. The new but everyday objects in our homes were mechanically advanced, but not so far as to prevent tinkering or achieving an amateur-level expertise over them. I'm thinking about the 1950s and '60s, and I suggest reading Richard Feynman's memoirs to get a sense of the fun and excitement that could be had from building your own microscope or fixing a broken television.

Recently, Neil deGrasse Tyson has been giving an inspiring talk all over the place, his thesis being that we've lost our passion for scientific discovery and exploration and we should get it back. He is most certainly right about that. The de-funding, and hence sunsetting, of NASA is a perfect example of how our interests have waned with regard to science, and space in particular. It's sad. But whose fault is it?

Can we blame politicians or our educational programs for the change in attitude? I don't think so. What happened was that so many of the objects we use every day have advanced far beyond our ability to comprehend how they work. We can't tinker with them. We can't open them up, poke around, and figure out how they work on our own, and so we feel distanced from them. Most (all?) of our cars have engines packed under the hood so tightly that it's impossible for non-professionals to get inside them. Gone are the days Dad would jack up the car and roll under it with that board on wheels to fix a carburetor or whatever (I don't know anything about cars).

The space program was exciting because it still seemed possible any one of us might one day visit the moon, or maybe even live there. Fifty years ago, it was a far-out thought but not universally considered unachievable. And at the time it was just as believable as a pocket-sized portable phone that could take video and instantly share it with someone on the other side of the planet. The mid-century utopian dreams for our living rooms and kitchens didn't take into consideration how far removed we would become from the mechanisms inside our technology. They assumed we'd always be able to understand and tinker with our possessions, but they were wrong.

I fear that we may never get back to a place where anything seems possible. We've become hyper-realistic about where the future is headed. When we do see something new, the reaction is usually a casual, passive fascination, not a passionate drive to become an active part of it. Science fiction went from being fanciful to dystopian to just plain dramatic fiction with spaceships or robots (a major generalization of the genre, I know). We've stopped looking outward and have turned our gaze more solidly upon ourselves. It's not only unadventurous, it is also cynical, and partly explains why we have become so self-centered.

How do we fix this? Maybe we've already started. If the problems which now catch our collective imagination are more culturally centered, we need to give everyone the ability to get involved. If we can't tinker with the inner workings of our iPhones and televisions, maybe we can find a way to interact directly with culture itself. The Kony2012 thing was a disaster a bazillion times over, but it went viral because people really want to help (if passively), and if we can somehow make it easier for people to get involved with solving humanitarian crises, then I think we should. That might -- just maybe -- be within the reach of future technological advancements regarding our globally connected social networks and egalitarian informational resources. Can Facebook become a tool for actual positive change in the world, like solving world hunger or global warming? If your first instinct is to laugh, I bet your second is to think about it for just a moment. After all, Twitter played a not insignificant role in the Arab Spring revolution of 2011. So . . . maybe?

There are designers who are constantly calling on us to choose design projects which help rather than hinder. It's a worthy compulsion. And I think one of the ways we get closer to making this happen is to find new ways to tinker with our world (or point us back to some of the old). If we can reinvigorate a relationship with how everything works, maybe we might start dreaming big again.

March 20, 2012

articles


Modern Art Might Have Failed Us, But Design Has Not

In an article in the Huffington Post, Alain de Botton asks why our art museums have failed us.

The problem is that modern museums of art fail to tell people directly why art matters, because Modernist aesthetics (in which curators are trained) is so deeply suspicious of any hint of an instrumental approach to culture. To have an answer anyone could grasp as to the question of why art matters is too quickly viewed as 'reductive.' We have too easily swallowed the Modernist idea that art which aims to change or help or console its audience must by definition be 'bad art' . . . and that only art which wants nothing too clearly of us can be good. Hence the all-too-frequent question with which we leave the modern museum of art: what did that mean?

This is a fascinating idea. I think he's right about the corner Modern Art has painted itself into. The art sits/hangs in big white rooms, stripped of context, unable to explain itself. But I also think there's a more interesting question just off to the side. We shouldn't be asking why art museums have failed us, but instead asking what has taken the place of modern art in the public square. I think the answer to that is design.

I remember when one of the most prominent discussions in zines and journals was about the public reputation of design and the importance of avoiding the very issue modern art fell into, namely, not meaning anything. In the 1990s, there was a fear among professionals that Design might be lumped in with Modern Art as something purely aesthetic, and viewed as nothing more than a pretty skin pasted over content. Because of the increasing ubiquity of Photoshop and the growing ease of self-publishing on the Web, professionals were rightly worried their clients would assume they could do the work themselves, and that the prevalence of prosumer design tools would lead to cheaper pay rates for actual designers.

This didn't happen. And that's partly due to the campaign for proving the worthiness of design as something more than looks, as an occupation with depth and heft, that Designers weren't Artists. Design meant something. And if a particular design didn't mean anything, it wasn't successful design. I think most people hiring designers these days understand that when they hire a designer, they are getting much more than a pretty new thing. They are also getting a package which includes more useful tools, smarter organization, and more efficient communication.

Design has gone much further than any other art at being "explicitly for something." Having escaped the clutches of being-for-itself, design was freed to solve problems once thought more in the purview of engineers, city planners, or politicians. It is now extremely common to see designers leaving client work to become developers, product creators, or initiators of social movements.

Many of these designers are art school graduates -- or art school dropouts -- and likely began their work in their youth by developing their style first. And it's probable they originally aspired to be career artists, not professional designers. That's kids for you. That was me. But growing up and growing smarter will change you. A serious study of art history leads towards a new understanding of context and of how art's cultural role was different in the past versus what it is today. You move from learning how important Duchamp's "Fountain" was in 1917 into having an awareness of our current ironic detachment from the absurdity of Dada. The eventual takeaway is that modern art no longer has the power to change the direction of culture. It did at one time, but it has lost its influence. We have moved on.

What has supplanted modern art as a cultural influence is design. This is why many of us designers abandoned our dreams of being Artists and instead proudly became Designers. Design is where the work is. Design is where the wheels touch the road. We talk an awful lot more about the design of the newest iPad than we do about Cindy Sherman's lifetime retrospective at MoMA. This is why de Botton is asking the wrong question. He's looking towards modern secular museums to show us something new, when it is real-world design that is right now actively making change and steering culture in new directions.

Try to imagine what would happen if modern secular museums took the example of churches more seriously. What if they too decided that art had a specific purpose — to make us a bit more sane, or slightly good once in a while or a little wiser and kinder — and tried to use the art in their possession to prompt us to be so? Perhaps art shouldn’t be ‘for art’s sake’, one of the most misunderstood, unambitious and sterile of all aesthetic slogans: why couldn’t art be - as it was in religious eras - more explicitly for something?

What de Botton is dreaming of and asking for is already here. It's not modern art. It's design. And it's not happening in museums. It's happening everywhere else. While most great art aspires to little more than the glory of entombment in a museum gallery, design has an actual, functional life in the greater outside world. de Botton is looking in the wrong place. Art museums are for things whose time has passed. Design lives and breathes and exists with us every day. It solves our problems. It influences our decisions in (hopefully) positive ways. And it can educate us and show us who we are.

There is no point in expecting modern art to be the influence in culture it once was. Not even a well-curated museum could return to art the power it used to have. It's wishful thinking. This isn't to say art is no longer moving, or beautiful, or shocking, or enlightening, or even occasionally important. It can be, and often is. But it doesn't solve problems, and museums won't replace religious institutions in providing answers to life's big questions.

I do appreciate de Botton's larger goal, which is to ask how we might find or create secular replacements for religious institutions. In his article, he never claims that art museums will replace churches, but only asks why not. I make no similar claim for design, except to say maybe it, and not art, might provide more possibilities.

Coda: I should add that though I believe de Botton has mistakenly neglected to consider design in his theory, I do agree with his suggestion for future museum curators. Museum galleries really might be far more functional if their works were "grouped together [. . .] from across genres and eras according to our inner needs." I would love to see that: museum visits as religious experiences. After all, what he's suggesting is better design.

Further Addendum: I'd like to clarify one thing about the above article. I love art and museums, and I spend a lot of my time looking at and thinking about art. Personally, when I'm in a gallery, rarely am I asking myself, as de Botton says, "What does it all mean?" My usual responses are "Huh", "Wow", "Sure", "Not quite", or "OH SWEET BAJEEBUS THAT IS INSANELY AWESOME". Neither do I think that artists working today are doing futile work. As long as time moves forward, we will need new expressions of what it means to be human. And I don't believe in ever closing the doors on any medium or creative outlet ever ever. We should be moving always upwards and outwards in every way all the time.

Modern art isn't over, as it might seem like I was suggesting. It's only less central to cultural progression than it used to be. Though it might be hard to imagine another single work of art, or even an entire movement, making big, new cultural waves, we should always be open to the possibility.

March 13, 2012

articles


Loss Aversion

At this morning's CreativeMornings talk, Kirby Ferguson introduced the concept of loss aversion. The focus of the talk was his video series "Everything Is a Remix", in which he produced numerous examples of how original ideas are copied, transformed, or combined from other influences to make something new. In other words, nothing is purely original. Everything is a remix.

While discussing the reasons people create, Kirby briefly mentioned 'loss aversion', the strong impetus for creation in order to preserve what we admire. We naturally want to protect things we like, whether it is an architectural landmark, an opera, a logo. It's an instinct, and it's one that I surmise all designers subconsciously operate with. As an idea, it's related to our love of sharing, of promoting what we believe deserves a greater audience, what we think should receive more respect.

We use the term 'shame' when talking about art that has been lost to history. As in, "It's a shame that the Buddhist statues were destroyed in Afghanistan." We feel a sadness at the loss of something great. The key is that this loss is subjective. Obviously, the Islamic group which dynamited the statues into rubble felt no love for them; they felt quite differently than I did. Our move towards protectiveness is a precautionary measure, because we know that if we don't stand up for what we love, there's a good chance no one will. There's a chance it will disappear entirely.

Loss aversion is also related to the better-known, more academic theory which states that we create in a cycle of consumption and production. We produce to consume, and consume to produce, which was an idea attempting to explain the success of the industrial revolution. As designers we now live in a different world. We aren't producing in efficiency-constrained, mass-production lines. We are a service industry with non-linear processes not often easily defined. The work we make is never a duplicate of what we made before. It now seems to me that making work is like making a child. It's the fornication of ideas. (Don't be creepy.) Influences work because we blend their DNA with ours. When we see work we like, we want to see more of it, and the best, most direct way for that to happen is to make it ourselves. We take what we like and we add ourselves to it, making something new. To call it 'loss aversion' is to view it as propagating the species. We aren't producing as a result of consuming, but as a result of combining. It's like the selfish gene in genetic evolution. We prevent loss by giving birth to new work which was fathered by the work which inspired us. In so doing, nothing is lost, but the family tree is lengthened.

Googling 'loss aversion' returns only results referring to it as an economic theory, wherein the loss is a financial loss. But I think Kirby makes a smart move in connecting it with art. There is a fascinating psychological factor in why we want to save the art we love, why we are averse to losing it. Our most cherished works of art have a different kind of worth. They feed our brains with influence so that we may in turn cherish the work we make ourselves.

August 18, 2011

articles


The American, Ralph Waldo Emerson

For his book, Examined Lives, James Miller selected a small handful of philosophers and looked at the way in which their philosophy blended with their lives. Of the twelve, many are expected: Socrates, Plato, Aristotle, Descartes, Rousseau, Nietzsche. But the one which stands out to me, and who most piqued my interest, was Ralph Waldo Emerson. He is not usually listed among other canonized philosophers, and, further, he is the only American chosen by Miller. I was eager to learn what he had to say about the life of the American thinker.

Historically, we Americans have had a long love/hate relationship with philosophers, thinkers, and intellectuals. From decade to decade, public opinion towards men of higher learning has swayed strongly back and forth between rapt fascination and antagonistic distrust. Lucky for him, Emerson arrived at a moment exceptionally friendly to new ideas and ways of thinking. The lecture circuit was thriving, with nearly 4,000 communities in the US hosting public talks, and Emerson greatly used this to his advantage, eventually making it his prime source of income.

People everywhere, from rural farmers to wealthy aristocrats, wanted to participate in this new form of entertainment. Many treated the lectures as academic hobbies, while others, particularly the poor, viewed them as a way to better themselves, raise their stations, and supplement their children's increasingly improving education. It was a great time to be an intellectual. The public was exceptionally receptive to new and dangerous ideas.

Emerson was often criticized as a dangerous thinker for how he challenged people to question their faith. He was "denounced as a man with 'neither good divinity nor good sense', an 'infidel and an atheist,' a freelance mystic whose message would weaken entrenched bulwarks of social order." We think of him now as a more benign figure, an American icon known for his uplifting and inspiring philosophy of "self-reliance," a supremely American idea if ever there was one, but let us not forget what kind of trouble a man could get into when swimming against the current of Christian Protestantism.

Bertrand Russell said of the Romantics, a group Emerson was an exemplary member of, "[They] did not aim at peace and quiet, but at vigorous and passionate individual life." The philosophical atmosphere Emerson worked to create was intended to be transformative, if not revolutionary. It is debatable to what degree he succeeded. How many people did he lure away from their churches and temples to think for themselves? Did he effect any change in the central way Americans approached their faith?

There is no doubt of Emerson's influence on the academics, novelists, and philosophers who followed him. How the philosophy of self-reliance landed on the greater American public, however, is difficult to discern. One might think that such an outspoken critic of authoritative religion would have acquired that reputation sooner. But he is still taught in schools, and in a more than glowing light. Could Emerson be our most celebrated American atheist? It may just be a matter of time before the same group pushing for Creationism in our children's textbooks notices the tiger in the tall grass.

What protects Emerson to this day is the beauty of his writing. He was a prose poet who forwent hard-line skepticism for the flowery and uplifting language of sacred texts. It is important to remember he was an ordained Unitarian minister who graduated from Harvard Divinity School. Emerson may have concluded that belief in a personal God was faulty -- "Prayer as a means to effect a private end is theft and meanness. It supposes dualism and not unity in nature and consciousness." -- but he knew better than to cast off the power of Biblical language.

Emerson thrived among the American people of his day. He lived as an intellectual, and though what he preached clawed mercilessly at the religious faith of Americans, we accepted him. We accepted him because he taught what we Americans know in our very bones, that the good things in life require work, and nothing is more inspiring than to be told "You can do it." What we so desperately need now is another Emerson, someone who can wipe all the clutter off our intellectual and spiritual lives, and give us the motivation and encouragement to be better. We can all be better.

March 10, 2011

articles


Now is a Needed Time

I do not recall the first time I heard Lightnin' Hopkins' song "Needed Time", but I do know I discovered it sometime just before March of 2003. The looming threat of an impending war with Iraq was building to a crescendo, and Hopkins' soulful, plaintive cry encapsulated exactly how I was feeling at the time: disenfranchised, neglected, betrayed, sorrowful, afraid.

I was 22 and had been feeling for the first and only time a palpable urge to be a soldier and to express my patriotism with a signature and a shave and a farewell to the safety of my American home. The attack on 9/11 gave a sense of purpose to my life, as it did for all Americans. But that feeling didn't last. It was swiped away by President Bush's misdirected decision to involve us in a different, completely irrelevant fight. I was pumped for the mighty American revenge on al Qaeda and bin Laden, but there was absolutely nothing in my heart for a war against Saddam Hussein. My electric patriotism faded into a cold, sweaty confusion. It was like standing at the edge of a steep cliff and accidentally dropping something important to you and having to watch it fall slowly down into the abyss and away from you forever. It was a needed time.

We could only watch. I was learning a powerful lesson about politics and life and destiny. So much of it is out of our control. There are bigger machinations happening all around us all the time. Gears turn and history grinds in dangerous ways. There is nothing safe about life. We all eventually get caught in the works.

And I listened to Hopkins sing this song with its spare lyrics, the barest of melodies, and its bouncing guitar rhythm. I listened and I got it. I too wanted Jesus to come by here, even if not for long. Just for a short while, for a little comfort, a little there there, it'll all be alright. And if not Jesus, you know, then something else. Anything else, because it didn't have to be Jesus. Hopkins called for Jesus, but any of the saints would have been fine, any of the devas or Bodhisattvas. Hell, any poet or even a librettist would do.

I kept listening. I played it over and over, night after night. The week the war began, I was in Paris with a friend, and one night we lay awake and listened to it together on repeat. I believe he got it, too.

If I could have played it then for the entire world, everyone would have got it.

The more I listened the more I found buried inside. There are really only four small lines in the entire song, but the repetition builds a powerful scene. A man, alone, his eyes wide and searching, his voice pleading, his heart aching, but his resolve unwavering. It is a testament to and a tangible experience of faith's ability to straighten one's spine, to lift the chin, and to give strength to muscle.

The war in Iraq came and went. The war in Afghanistan goes on, malignantly. It is still a needed time, and it always will be. I again listen to the song and am consoled in whatever lies heavy on me at the moment. Hopkins continues to beg for help, keeps praying on his knees, and though Jesus never will visit, the sentiment is enough to console.

February 11, 2011

articles


The Age of Reason

I once read of a movie-goer who, upon hearing a film's title spoken aloud as dialogue during the film, would yell out a "Woot!" while seated there in the darkened theatre. I was in a restaurant, not alone, when I came upon this line of the novel:

"You have, however, reached the age of reason, my poor Mathieu," said he, in a tone of pity and of warning. "But you try to dodge that fact to, you try to pretend you're younger than you are."

and I nearly woot!ed out loud.

But what's funny about this phrase, "the age of reason," as the title of the novel is that it's misleading as an indicator of the heart of the book; it misses it by a hair. It tells you to be on your toes for philosophical discussion, that there will be some range of moral quandary, and that the human intellect is in the author's cross-hairs. All this is right. But at the moment you find the phrase in the novel (approx. halfway through) you find its purpose in the novel isn't to point to the philosophy, but instead to the moment in a man's life, around the age of 30, when he must start acting rationally and reasonably in earnest, to take full responsibility for his own actions. Post-30 is the age of reason.

Boris, in particular, has a principled and cynical view of thirty.

Boris felt desolate, and the thought, the grinding thought, suddenly came upon him: "I won't, I won't grow old." Last year he had been quite unperturbed, he had never thought about that sort of thing; and now–it was rather ominous that he should so constantly feel that his youth was slipping from his fingers. Until twenty-five. "I've got five years yet," thought Boris, "and after that I'll blow my brains out." [pg.40]

Mathieu looked at him with a kind of shocked benignity. Youth was for Boris not merely a perishable and gratuitous quality, of which he must take cynical advantage, but also a moral virtue of which a man must show himself worthy. More than that, it was a justification. "Never mind," thought Mathieu, "he knows how to be young." He alone perhaps in all that crowd was definitely and entirely there, sitting on his chair. "After all, it's not a bad notion: to live one's youth right out and go off at thirty. Anyway, after thirty a man's dead." [pg.238]

What Boris is trying to avoid–and what Mathieu has found himself in the thick of–is the entanglement of responsibility. They want to be free of mandatory acts. When growing older it becomes inevitable that something will happen that ties you down, which restricts your movements, forces your tongue. For Mathieu that seems like death. We get Mathieu's professorial explanation of freedom via Boris's recollection: "the individual's duty is to do what he wants to do, to think whatever he likes, to be accountable to no one but himself, to challenge every idea and every person." Right at the beginning Mathieu says to himself, "I'm getting old," and we prepare ourselves for impending trouble, an unwanted responsibility.

The heart of the book is this: Marcelle, Mathieu's girl, becomes pregnant. They decide to terminate the pregnancy, and though Mathieu spends most of the book trying to raise the money, he eventually realizes the moral implications of his situation, that if Marcelle has any inkling at all about keeping the baby (and she does), then he has no choice but to marry her. Mathieu cannot come up with the money, and he cannot send her to a budget, back-room abortionist. With the pregnancy, Mathieu is no longer accountable only to himself, but also to Marcelle and the unborn child. His old philosophy of freedom is stripped from him. He can no longer "do what he wants to do", but, having reached the age of reason, must act rationally. He must marry Marcelle. But why?

Sartre never had children, and he never married. He had a life-long open relationship with Simone de Beauvoir, and as far as we know they never aborted any pregnancies. But the fear of it must have been there. If Simone were to become pregnant, what would they do? How would that affect Sartre's working philosophy? Where does abortion stand within existentialism? As a philosophy, existentialism is very compact and pragmatic, a simple idea at its core which then expands infinitely outward, becoming universally applicable. Sartre has two different ways of defining existentialism: one very austere and esoteric, the other more for laymen and the general citizen. We won't bother here with the first, the idea that "existence precedes essence" -- that takes an 800-page book to explain -- but we can work with the latter, that "[Man] exists only to the extent that he realizes himself, therefore he is nothing more than the sum of his actions, nothing more than his life." If Mathieu were to push Marcelle towards abortion, he would be a man who pushed his pregnant girl into having an abortion. If Mathieu married his pregnant Marcelle in order to take responsibility for her and the child, then he would be a man who takes responsibility for his girl and child. All he needs is to choose: which man is he?

There is no question of the morality of abortion. That never comes into play. For in existentialism, there is no judgment of murder, of any act. Either one murders or one doesn't; it's a matter of action and self-definition, not of right or wrong. (Let us not forget existentialism is an atheist philosophy, so no heaven or hell to consider, no higher judge.) The moral problem is an interpersonal one, a humanist one. How does Mathieu's decision affect other people? How does it change their impression of him? How does it change him?

Mathieu has this revelation:

"They have lives. All of them. Each his own. Lives that reach through the walls of the dance-hall, along the streets of Paris, across France, they interlace and intersect, and they remain as vigorously personal as a toothbrush, a razor, and toilet articles that are never loaned. I knew. I knew that they each had their life. And I knew that I had one too."

And then, immediately, this:

"I've yawned, I've read, I've made love. And all that left its mark! Every moment of mine evoked, beyond itself, and in the future, something that insistently waited and matured. And those waiting-points–they are myself, I am waiting for myself on a red armchair, I am waiting for myself to come, clad in black, with a stiff collar, almost choking with heat, and say: ‘Yes, yes, I consent to take her as my wife.'

Then, later:

"No, it isn't heads or tails. Whatever happens, it is by my agency that everything must happen." Even if he let himself be carried off, in helplessness and in despair, even if he let himself be carried off like a sack of coal, he would have chosen his own damnation; he was free, free in every way, free to behave like a fool or a machine, free to accept, free to refuse, free to equivocate; to marry, to give up the game, to drag this dead weight about with him for years to come. he could do what he liked, no one had the right to advise him, there would be for him no Good or Evil unless he brought them into being. All around him things were gathered in a circle, expectant, impassive, and indicative of nothing. He was alone, enveloped in this monstrous silence, free and alone, without assistance and without excuse, condemned to decide without support from any quarter, condemned forever to be free.

Mathieu, post-30, taking his brother's advice and accepting his "age of reason", decides to propose to Marcelle; he will avoid the abortion. What happens next is a matter of plot and is inconsequential to the philosophy of the book. The moment of real importance to us, the close readers, is the moment Mathieu decides. He has chosen his action, has chosen for himself which man he would be.

The novel is episodic, follows the meandering lives of a number of protagonists (none followed so closely as Mathieu, however), and exemplifies a philosophy at work, what Sartre called his Morale (also known as his treatise Being and Nothingness). The lives of these characters are intertwined; they dissect each other, run parallel at times, then diverge or converge. What matters is what each does for the others, what they do to each other. We are given everyone's opinions of everyone else, and Sartre shows us what they think through a narrative style that shifts loosely from free indirect to third-person close. We see what they think, but what is consequential is what they do.

Nearing the end, we get a prophecy of what's to come, an undefinable sense of a foreboding future:

"This is going to end badly," thought Mathieu. He did not exactly know what was going to end badly; this stormy day, this abortion business, his relations with Marcelle? No, it was something vaguer and more comprehensive: his life, Europe, this ineffectual, ominous peace. He had a vision of Brunet's red hair: "There will be war in September." At that moment, in the dim, deserted bar, one could almost believe it. There had been something tainted in his life that summer."

The novel was set pre-war, written during the war, and published post-war. Sartre knew what terrible things were happening in history, and in The Reprieve, the second in the Roads to Freedom trilogy, we find these characters thrown haphazardly across Europe, their lives disrupted and uprooted. The road to freedom leads into the fog of war. More choices. More inescapable decisions that will change lives, that will change the world.

November 12, 2010

articles

