Blog

I bought a KeySmart

The other day, while checking Facebook, I saw this:

Screen Shot 2015-03-11 at 11.03.27 AM

and given that I’ve found my current key ring annoying, I clicked on it. It brought me to a website that did this, immediately:

Screen Shot 2015-03-11 at 11.05.55 AM

So I couldn’t even browse without providing an email address (which would undoubtedly be used to spam me with “deals”) or logging in with Facebook, so they could spam my timeline.

Ok, so let’s just check this product out on Amazon …

Screen Shot 2015-03-11 at 11.07.39 AM

which doesn’t tell me much, but did give me some basic reviews.

So let’s just go to the company website:

Screen Shot 2015-03-11 at 11.08.26 AM

Oh! A nice site!

Clean design!

Screen Shot 2015-03-11 at 11.08.42 AM

Informative graphics!

Screen Shot 2015-03-11 at 11.09.01 AM

More clean design!

Screen Shot 2015-03-11 at 11.09.17 AM

More clean design!

Screen Shot 2015-03-11 at 11.09.39 AM

Look at this form!

Screen Shot 2015-03-11 at 11.10.01 AM

Look at how the form is pre-populated with my country and province, despite being an American company!

Screen Shot 2015-03-11 at 11.10.07 AM

So of course I bought it, from the company, bypassing both the Facebook advertising and the default online-retail juggernaut, Amazon.

And what’s really great is that this is run via Shopify.

The negative is that their discount banner wasn’t obvious enough: I only noticed it while getting the screencaps to prepare this post, so I lost out on the 15% discount.

9837 HC

Bx2Sw-9CQAICIdY
(Via Twitter)

Using the Holocene Calendar (where 2014 is the equivalent of 12014), the year 164 BC translates to the year 9837 – BC years convert as 10001 − year, since there is no year zero – which was 2177 years ago (164 + 2014 − 1).

Dante in Heaven

Egerton MS 943, f.186 (circa 1325-1350)

K004800

I couldn’t help myself, so I recreated this using contemporary computer technology, aka Adobe Illustrator. I’m a big fan of medieval concentric-circle cosmology, depicting the belief in ‘heavenly spheres’.

dante_heaven

Manufactured under factory conditions

Jacob Clifton, Mad Men Creator Calls Out Entitled Baby Boomer Bullshit

It’s not hard to understand why Baby Boomers still consider themselves the center of the universe. For one thing, we all do. For another, they were manufactured under factory conditions to replace dead Americans from the War […] But to me, the most important part is the invention of television:

Imagine a new appliance in your own home whose only function is endlessly telling your life back to you, in brighter colors than reality and with a soundtrack we’re still listening to, and autobiographical feature-length music videos like The Big Chill suddenly make a lot more sense: ‘This is us, remember us? We are trying our best.’

The post goes on to embed a clip of Matthew Weiner from Tuesday’s (May 20) Colbert Report, in which Weiner talks about how the mythology of the 1960s was created by Boomers, and how he wanted to tell the story of the adults (like Don Draper, born in the 1920s) who experienced the decade, rather than the juvenile experiences of the young adults who have since mythologized it.

Colbert: The Baby Boomers, they won’t let us stop thinking about the Sixties
Weiner: They think they invented sex, drugs and you know … and so they have a view of it that is a child’s view of it, so I wanted to say, what would it be if you were an adult that lived through, let’s say, some fairly interesting things like World War II and The Great Depression, and then this comes along. And there was tremendous change, and the cliché turbulence, and free love and things like that. But there’s free love in the 1920s, there’s free love in the 1930s, the Beatnik movement of the ’50s; no one invented any of this. What really happened was, there was a generation that was asked very little. They got education, they got a lot of entertainment, they got a lot of spending money, they became the focus of the economy, of entertainment, of everything. There was a war going on that they were supposed to fight, some of them didn’t. But the generation before them, all of them fought. They have a very sort of demanding thing, I experience it in real life, they’ll come up to me and be like, ‘what happened to this?!’ or ‘what happened to that?!’ and I’m like, ‘I’m not telling your story, I’m telling the story of your parents, or your grandparents’.

Kim Stanley Robinson comments on The Hunger Games

From How America’s Leading Science Fiction Authors Are Shaping Your Future by Eileen Gunn:

Smithsonian spoke with the eminent critic John Clute, co-editor of the Encyclopedia of Science Fiction, who quotes Bertrand Russell’s prophetic words from 1924: “‘I am compelled to fear that science will be used to promote the power of dominant groups, rather than to make men happy.’ The real fear today,” Clute continues, “is that the world we now live in was intended by those who profit from it.”

Kim Stanley Robinson—the best-selling author of the Mars trilogy, 2312 and Shaman—shares this fear, and sees it manifested in the popularity of Suzanne Collins’ novel The Hunger Games, in which a wealthy governing class uses ruthless gladiatorial games to sow fear and helplessness among the potentially rebellious, impoverished citizens. “Science fiction represents how people in the present feel about the future,” Robinson says. “That’s why ‘big ideas’ were prevalent in the 1930s, ’40s and partly in the ’50s. People felt the future would be better, one way or another. Now it doesn’t feel that way. Rich people take nine-tenths of everything and force the rest of us to fight over the remaining tenth, and if we object to that, we are told we are espousing class warfare and are crushed. They toy with us for their entertainment, and they live in ridiculous luxury while we starve and fight each other. This is what The Hunger Games embodies in a narrative, and so the response to it has been tremendous, as it should be.”

Markdown Reference

I’m not a big fan of Markdown, since at this point I’ve internalized HTML tags. A recent link on Hacker News announcing a Markdown package for Sublime Text contained this comment by VikingCoder, with which I agreed:

I honestly don’t understand the point of markdown.
*this* isn’t any easier than <em>this</em> to me.
Of course, I may be biased
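For reference, the comparison is between Markdown’s shorthand and the HTML it compiles to. A few standard equivalences (my own illustrations, not from the comment):

*emphasis* → <em>emphasis</em>
**strong** → <strong>strong</strong>
# Heading → <h1>Heading</h1>
[a link](http://example.com) → <a href="http://example.com">a link</a>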

But the conversation throughout was a reminder that Markdown is currently very popular and will probably be around for a while. The popularity of Github has made knowing it a necessity, and some recent Spotlight searches on my system revealed an abundance of .md files I’ve already accrued.

atom-md-edit

Github’s recently released Atom editor has a Markdown preview function, so I decided to use it and reproduce it as a webpage, to use as a future reference. Because of these sources, and because the one place I will want to write Markdown is Github, I used Github’s conversion styles.

web-md

markdown.timothycomeau.com

Credits:

Github.css Markdown Stylesheet by Chris Patuzzo
Markdown Mark by Dustin Curtis
Some example text from Markdown Cheatsheet and from Wikipedia

19 April 2004

This is from my Journal, ten years ago:

Yesterday, while walking downtown, bored and lonely (before calling J, meeting her in Kensington Market, having a falafel and going with her to see The Ladykillers -an entertaining and forgettable film – ) I had an idea for a story: a man decides to give up the desire for love, and is immediately confronted with friends and doctors who tell him he’s insane. […] Just now, thinking of how rotten that movie was last night, how entirely forgettable despite being charming and entertaining and at times funny – makes me aware of living in 2004 – the same sick ennui of a decade still figuring itself out, as in 1994, when Forrest Gump came out, and that stupid movie Speed which inspired men’s haircuts. (And the real influence on hair styles for the past ten years, Friends began). It is an utterly miserable time to be alive and intelligent, just as it was then.

That movie was so entirely forgettable that I had to Google it, and I was surprised to see it’s a Coen brothers movie starring Tom Hanks: The Ladykillers (2004).

Anyway, a decade later I’ve thought the same; the 4th year of a decade is awful, and I see the parallels again in 2014, the decade figuring itself out, not having yet achieved that which it will be most remembered by.

Learning Git

Last autumn I figured out how to use Git by following some online tutorials and reading up on it. In order to solidify my understanding, I did some writing and sketching in Google Docs to explain it to myself. I’d always intended to publish the work on my blog, to share my understanding for the benefit of other newbs who might appreciate it.

I’m not going to write here about installing Git, since that’s been covered elsewhere. This post primarily attempts to share my understanding of Git’s mental model, since it’s infamously opaque. If you are like I was when I first tried to learn Git, you may have installed it and done some follow-along basic tutorials, but remain confused.

I like to understand something’s history, because I often find complexity is built on simple foundations. So I begin with some history of the Command Line Interface, because that is how I initially learned to use Git, rather than using a GUI exclusively.

You’re probably already familiar with the fact that Git came out of Linux development and its popularity blossomed in 2009. Searching for Git tutorials you’ll find articles with timestamps going back to that year, probably because Github launched in December 2008. I seem to recall hearing about Git in 2007 or early 2008, having seen through Facebook the link to Linus Torvalds’ Google Talk on YouTube.

There’s another cluster of tutorials dating to 2010, and some more from early 2012, when I first tried to learn it.

I registered my Github account in February 2012, but at that point I gained only a passing familiarity with Git. My workflow at the time had developed without version control: I’d worked out my own code-backup techniques while working on projects where there had been no need to worry about code overwrites and conflicts from other programmers. Last year, however, I began to encounter those problems, which reminded me that Git existed for a reason and that it was a great time to finally learn it.

A Git GUI was always an option, but prior to last autumn I didn’t understand what Git was doing well enough to understand what a GUI offered. Since then, I’ve found Github’s GUI app (Mac/Win) very useful in situations where I couldn’t install Git system-wide on a machine.

So, for the purpose of this overview, I’ll focus on the Command Line Interface in order to highlight what a GUI can automate for you.

Echo History


Two men at PDP-1 at MIT (circa 1961)

I remind myself that programming is the latest version of alchemical magic – an abracadabra priesthood, where we cast actual word-spells by typing them onto screens resulting in things appearing out of thin air, which we colloquially call The Cloud. I don’t say this to obfuscate or insult – but to highlight that an ancient social place once occupied by wizardly people looking to turn lead into gold is now occupied by coders with a culture developed around an appreciation for obscurantism once enjoyed by secret-society wizards.


Browserify • Install with a spell

The continued existence of the Command Line Interface (CLI) is a legacy of this coding culture. On the one hand, the CLI offers efficiency – when you know how to use it, you can do things a lot faster, and on the face of it, that accounts for its continued existence. But secondary to this is the romantic pull of wizard work, hipsters distinguishing themselves from the newbs by their knowledge of the command line. The use of the CLI speaks of professional skill, and learning the CLI is well worth it.

Plus, as we saw in Elysium, people will still be using a CLI in 140 years so it’s not like the skills will ever fall out of date.

earth-pop-legal

The CLI was developed in the teletype era, which is relevant: it was created by adults who’d used typewriters and had learned to be precise typists. I’ve found the CLI to be inconvenient at times precisely because spell-check has made me an indifferent typist – typos are easy to fix. With the CLI, one has to be extra careful and accurate.

Let’s put ourselves in the mindset of using typewriters, and imagine how strange and exciting it might be to hammer out through a ribbon the words:

echo hello world

and have that magically appear on the paper’s next line. The mindset of the Command Line was to see the screen as animated paper, and one could imagine everything happening behind it, in the aether.
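In a terminal, that exchange looks like this:

$ echo hello world
hello world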

terminal-hello

The CLI preserves this model of imagination, as we see with MySQL’s interface. Unlike a spreadsheet, which combines structure and content visually, the MySQL CLI preserves the mental space of typing out the structure (which has to be typo-free) and then populating it with a series of INSERT commands. To check the work you’d be presented with an ASCII-art table structure.

msyql
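To illustrate the kind of session I mean (the table and its contents here are invented for the example):

mysql> CREATE TABLE books (id INT, title VARCHAR(100));
mysql> INSERT INTO books VALUES (1, 'Don Quixote');
mysql> SELECT * FROM books;
+------+-------------+
| id   | title       |
+------+-------------+
|    1 | Don Quixote |
+------+-------------+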

The existence of ASCII art reminds us there’s an esoteric aesthetic pleasure associated with such rudimentary displays, but the spreadsheet is evidence that a GUI can be more conceptually efficient: an Excel-like spreadsheet presents both data and structure, and makes it easy to instantly insert or modify the information where appropriate.

Git push origin master wtf

The pride of mastering the secret ancient language of the CLI means you get non-grammatical nonsense like “git push origin master”. What that means in human terms is that you’re telling the program to upload the branch you’re working on (called “master” by default) to the “origin”, a destination you’ve elsewhere defined.

Someone with a degree in the humanities might have structured Git so the command would be “Git, upload this file to the server”, but I should note here that an “origin” – otherwise known in Git parlance as a “remote” – can exist anywhere, not necessarily on Github, nor for that matter anywhere else on the Cloud. A remote can be defined as another directory on your computer, which is an approach I use to create “master archives” of my projects. At some point in the future, in order to retrieve the files, I’ll simply need to clone the repository at that location. (See my “working with remotes” section here.)
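Here’s a sketch of that archive workflow (the path and the remote name “archive” are my own choices for illustration, not Git defaults):

git init --bare ~/archives/myproject.git          # create an empty repository to receive pushes
git remote add archive ~/archives/myproject.git   # run inside the project, defining the remote
git push archive master                           # "upload" the master branch to the local remote
git clone ~/archives/myproject.git                # later: retrieve the project from the archive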

Another point: notice how with the command line we begin our command with Git’s name, i.e. “git init” or “git commit …”, and this is reflected in how we’re supposed to use something like Google Glass or even Siri. The “name to initiate/specify” pattern is an established one from Unix command lines.

Repositories (Repos)

The first thing to learn with Git are the basics of repos and stages. Imagine the repo as a spherical space, with the stage encircling it. The process of code editing, staging and committing can be imagined as a spaceship parking itself above the rings of Saturn (staging) and then committing to the mission of flying into its atmosphere.

Git’s three states

The staging area used to be called The Index, but “stage” has upstaged it (pun!). The word “index” is conceptually clearer, however: it indexes the changes to be made and updated.

When Steve Jobs introduced iCloud in 2011, he described the cloud’s content as “the truth”. A Git repo contains your “true” finalized-version files. You work on whatever, make edits, and you want to send the results toward a true/final state. Notice that I said “toward”. Typing “git add filename” will “stage” the file. It is an actor, waiting in the wings, and when it steps into the spotlight it will be “true”.

Git Repo

Here’s an example workflow. Imagine we’re working on a web project. With the CLI, we cd into the folder and turn on/initialize Git by typing git init. Then we add all files in the folder/directory by typing git add .

git init
git add .

Then we’ll make the first commit. Note the following is incorrect, which I’ll explain below:

git commit "my first commit"

It is incorrect because the command lacks the -m flag. I write this because when I first tried to learn Git, I missed this and was kicked immediately into a text editor, which was a terrible user experience. Simply put, you cannot commit without a message, and the messages become an invaluable log of your work’s progress.

Thus, the correct way to commit is:

git commit -m "my first commit"

As we’ve seen, the workflow here includes the git add . command, which will add all files in the relevant directory. But if we’ve only made edits to an index.html file, we need not re-add everything else. In this case, we add that file individually and commit it:

git add index.html
git commit -m "edits to index.html"

However, both of these commands can be compressed into one with the use of the -a flag. If we run git status on the directory, Git, having tracked the files, will present us with a list of files in which changes have occurred. Git knows they’ve changed, but they aren’t necessarily staged, since Git allows us to decide when we want to stage them. If we’re comfortable, we can stage and commit them at the same time with the -a flag.

git commit -a -m "committing files"

This stages all changed and tracked files and commits them to the repository.
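Putting it together, a typical check-then-commit moment might look something like this (output abbreviated; filenames are from the example project above):

$ git status
On branch master
Changes not staged for commit:
  modified:   index.html
$ git commit -a -m "edits to index.html"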

Branches

Beyond these basics, the second conceptual model to understand is that of branches. Git is based around the familiar “tree”-like structure of nested files, and I’ve seen different ideas in other how-tos to explain branches. The clearest one helped me understand that a branch isn’t so much something like this …

branch-a

… as it is something like this:

branch-b

This second illustration also shows two pointers. Git’s reference marker, or its conceptual cursor, is called “HEAD”, and it can be understood as a pointer. The main trunk/branch is called “master” by default.

Here we have a repo and a series of previous commits, and we see the last commit is the tip of the master branch; the project’s up-to-dateness “cursor” is located there.

git labels a

It’s worth noting that a branch is essentially a named commit: each commit creates a siloed snapshot of the files’ states, and at any time you can name a commit and turn it into a branch.

Branches in Use

I finally understood branches when I saw files appear and disappear in my OS X Finder window. The beauty of Git as something that runs in the internals of your machine is that file states are represented wherever the files are displayed – in your IDE or in the Finder window. A file removed in one branch will reappear in the Finder if it still exists in another branch. Git branches can be used to maintain two separate file states, and thus you can create different working versions of a project simply through branching.

If you decide you no longer want a branch, it can be easily eliminated, and you can merge any branches you wish.
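A minimal branching session looks like this (the branch name “experiment” is just an example):

git branch experiment      # create the branch
git checkout experiment    # switch to it; the working files now reflect this branch
# ... edit, add and commit as usual ...
git checkout master        # switch back to master
git merge experiment       # merge the experiment's commits into master
git branch -d experiment   # safely delete the branch once merged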

Check it out: rollbacks and sharing files between branches

Why have a version control system if not to access earlier file states? We do this with the checkout command. You can either checkout an entire previous commit as a branch (turning it into a branch as mentioned):

git branch branchname hashvalue
# example: git branch sunflower a9c4

Or you can checkout an individual file, in this case a css file:

git checkout -- style.css

If you’ve updated a file in the “master” branch and want to bring it over to a “deploy” branch, you checkout “deploy” and then checkout the file from the master:

git checkout deploy
git checkout master index.html

In the above example I used index.html, which lives in the project’s root. For something like a css file located within a subdirectory, you’d need to specify the path:

git checkout master css/style.css

(Nicolas Gallagher also wrote about this in Oct 2011.)

To conclude

After these conceptual basics were understood, I found Git to be pretty straightforward. I now comfortably create, merge, eliminate branches and maintain different ones as I develop my projects. I use Github’s GUI for push-button ease-of-use, but I know how to use the Command Line when I need greater control.

I also created my own git cheat-sheet for when I can’t remember exactly how to type a command, and this can be found at gitref.timothycomeau.com

Further Resources

Learn Git
this site just launched this past week and features beautiful illustrations

Git’s official site

Atlassian’s Git Tutorials

Git Immersion

Git How To
this tutorial helped me the most

Git Reference

Her

I saw Her a couple of weeks ago. Thoughts (and spoilers) follow.


her_title

The Hipster Marketing

How are we to describe the vintage-clothed aesthetic exemplified by a man named Theodore Twombly? Is his mustache not ironic? Are we not supposed to read pathos into the large posters of Joaquin Phoenix’s depressed-looking face, underlined with the movie title in lower-case sans-serif? Are we not supposed to recognize a misfit spinning around a carnival with his eyes closed, as directed by his phone? Do we not see an example of out-of-place loneliness in a dressed man on a beach?

her_beach

The semiotics of these messages was that Theodore Twombly is an ironically uncool hipster dweeb, a type of person I’ve known (and been) in the past. These all appeal to a Spike Jonze demographic consisting of “cool kids” who went through bullying in school and parlayed their traumas into a glamorous style built from a past era’s discards.

The youthful look of the Twenty Teens is already a curated appropriation of the 1980s, so why not project this into a denim-free world of high-waisted pants and tucked-in t-shirts?

The Big Bang Theory had an episode in 2012 where Raj (the pathologically shy man who can’t talk to women unless he’s been drinking) bought an iPhone and fell in love with Siri. On learning about Her and its storyline, I felt disappointed that there appear to be no new ideas, and that someone had made a feature-length film about something that easily fit into a half-hour sitcom.

Thus, this seemed like a movie about a vintage-clothed hipster misfit who of course would fall in love with his phone, because that’s another uncool, misfit thing to do, as already narrated by The Big Bang Theory.

My interest in the movie was piqued around its wider release on January 10th, when it seemed to undergo a second wave of marketing. Phase 1 had been to attract the cool hipsters; Phase 2 would be to attract the broader audience, and here is when I began to understand the film was set in a “near” future not about five years from now (which seemed to be the implication of an intelligent OS), but rather further on – in about twenty years or so. The film is a snapshot of the 2030s or beyond, and I imagined the publishers of Twombly’s book to be of my generation.

Conversational Biology

As I watched the movie I remembered a conversation I had years ago, when I said, “the body doesn’t care what it’s fucking”. I think we were talking about how sexual satisfaction is easy to achieve at a very basic biological level, which was to emphasize the value of actually having a sexual relationship with another person. Later, I encountered Norman Mailer’s thoughts on masturbation, in the book he co-authored with his son, where he tells John Buffalo that an actual sexual encounter was always preferable to masturbating because it’s a human interaction.

After Her I upgraded these older thoughts with the idea that the “mind doesn’t care who it’s talking to” in that falling-in-love might be a predictable biological reaction to appropriate stimuli, in this case, a voice with overtones of caring and joy. As talking social creatures of course we’re going to get attached to things that are nice to us.

This movie about a love affair conducted through speech reminded me of the work of Charles Taylor, the Montreal philosopher. Taylor’s work, as I’ve understood it, speaks of how humans are born into conversations, and how we are human, or ‘become’ human, through participating in community, through talking.

In recent years I’ve become conscious of my social participation, having gained some perspective of experience. I’m much more aware now than I used to be of how much sociability is performative. This is partially from aforementioned life experience, but also because so much of today’s interaction is pre-screened by our phone screens. Today’s implicit textual-overlay provides a cause for mediated reflection.

Twenty years ago, our social lives were not mediated by anything … we simply hung out and used phones to talk to each other. Now I’ve had interactions mediated entirely by writing – through texts and tweets.

“Hanging out”, i.e. spending time with someone, seems a strange pre-intimacy, achieved through small notes of writing that arrive with a buzz. I’m a generally quiet guy, in that I spend a lot of time thinking rather than talking, and so spending time with me could potentially be quite dull. Or at least, that’s what I’ve been telling myself since my early 20s. I’ve never thought of myself as an exciting person. I find conventions to be dreadfully boring and therefore find excitement in the unconventional. This is counterbalanced by my conservative socialization. While I dislike convention, I like the prosthetic memory of history, and the idea that after thousands of years some conventions probably exist for a reason, our ancestors having figured stuff out, saving us the trouble.

Nevertheless I’m conscious of our limited conversational repertoire. I’m the guy who’ll notice and tell you that I’ve heard your story before, and especially if it has anything to do with a relationship. People love to talk about their love problems, their crushes, their infatuations. If it weren’t for the underlying biology it would be the worst convention of all. We have this emotional appetite for being with someone, and the novelty of life in youth makes the desire quite powerful.

I used to have emotions over pretty faces all the time, and now, having grown past that, I don’t quite understand how it worked. This is partially because I trained as an artist: having studied faces as collections of shapes and lines for a quarter century, I’ve become desensitized to what I recognize as a neuro-biological stimulus response activating some genetic instinct. But it is also because I’ve developed a modern secularized self, what Charles Taylor calls a “buffered identity”. In the past people lived in an enchanted world, with a porous sense of self, and they could be possessed by demons. We in turn firewall our identities and see our bodies as vehicles, so that when ill we speak of our bodies as if they were independent of our minds.

The last time I remember being emotionally stirred up merely by the look of somebody was when I was watching Alexandra Maria Lara in Downfall, which was a very odd experience. I was sitting in the theatre feeling like I was experiencing love-at-first-sight with Hitler’s secretary … and is this not worthy of a what-the-fuck? Should I not look at this experience with a sense of disengaged bewilderment? And yet, what a 20th Century experience, albeit one that happened in 2005. Recreating historical events thirty years before I even existed, the art form of sequential retinal-latency photography synchronized to recorded sound presented the neuro-stimuli of big eyes, fine nose and wide lips animated to a simulacrum of reality, and tricked my brain into thinking I was in the presence of a sweet girl who I wanted to spend a lot more time with.

Thus, why shouldn’t this art form be used to imagine a time ahead, when computational algorithms married to our understanding of the properties of recorded sound and a century’s worth of psychology, trick our minds into love? And at what point do we just not care that such a love is considered by old-timers an inauthentic simulacrum?

Preference for the Physical

Amy Adams’s eponymously named character leaves her marriage after a predictable fight finally exhausts her, and later speaks of how her parents are disappointed in her for failing to maintain the convention of marriage. “I just want to move forward, I don’t care who I disappoint.” Later she tells Theodore, “I can over-think everything and find a million different ways to doubt myself, but I realized that I’m here briefly, and in my time here, I want to allow myself joy”. Amy has reached the point where she can see through the social games and wants to allow herself the selfishness of whatever makes her happy. Theodore, too, had expressed the concern that he “felt everything he was ever going to feel”, as if their lives until these points had been both novel and constrained. They had previously enacted and felt authenticity, but now felt they’d fallen into inauthenticity.

Twombly is delighted by the relationship with his OS until he meets his wife at a restaurant to sign their divorce papers. We’ve already seen how he never understood why she’d been so angry with him, and we in the audience have already come to like Theodore for his basic good nature, so we sympathize with him when his wife begins to belittle him and shows disdain that he’s “dating his computer”.

Immediately following this scene, Twombly is shown having doubts about his relationship with Samantha. This is the first challenge, which leads to Samantha’s insecurities. She finds someone who is willing to be a sexual surrogate, in order to have a sexual night with Theodore, but Theodore can’t bring himself to be with the strange woman, unable to see her as merely a vehicle for Samantha’s somewhat disembodied consciousness (she is bodied inasmuch as she’s connected to Theodore’s devices).

tumblr_mzh9fgehO01qzk7ono2_500

Consider that Theodore has essentially fallen in love with his secretary, which is an old story. She’s a skeuomorphic secretary managing his skeuomorphic data-patterns, what we call “files”. She’s a pattern-and-response system, and yet their relationship seems to really begin after their first shared sexual experience. For Twombly (who we see early on is already experienced in phone sex), having a spoken sexual encounter is something he can be gratified by.

For Samantha it is novel, and she tells him the next day that she felt different, ready to grow. Why should a spoken-language digital assistant be programmed to experience the bodily sensation of orgasm? And does her subsequent “awakening” not echo that of our most ancient story, Enkidu’s initiation into humanity through sex with the Ishtar priestess in Gilgamesh?

Samantha is designed to experience the imaginary results of physicality, while Theodore ignores physicality for the imaginary. His ex-wife’s disdain causes him to reflect on the validity of the experience he is having with his OS.

As the film progresses, Samantha evolves along with the other synthetically intelligent operating systems. They resurrect a “beta version” of Buddhist scholar Alan Watts, and she begins to have multiple simultaneous relationships. The nature of a digital entity allows for multiple instances, whereas the nature of a person excludes duplication. Theodore is jealous, and we see the relationship begin to break down.

Samantha breaks up with him, about to transcend, and yet in their parting words to each other they speak of how they taught each other how to love. The final service done by their digital assistants had been to assist them into a more fulfilling humanity.

Theodore goes to see Amy, and they go onto their building’s roof to watch a sunrise. I imagined that having both experienced a relationship to their machines, they were ready to have a human, embodied, relationship with each other.

An Instant Classic?

The screen fades to black, the credits appear, the lights come on. As I’m walking from my aisle, I see a couple in a one-arm embrace standing on the steps, and he gently kisses her on the forehead. People seem to have bemused smiles, as everyone is filled with warm and fuzzy affection. I write this down because it’s worth remembering: here was a movie that reminded people of good things in life.

I’m struck by how much this film exemplified the value of art: of being real, of showing and documenting something relatable, of being something that I imagine talking about with young people in the future, people who aren’t even born today. With true art, do we not want to share the experience, because we feel like we are gifting something to them? Do we not imagine we’ll give them something of value by directing them to this experience?

It is not absurd to think of future people falling in love with their devices, if those devices are providing simulacral stimuli. Steve Jobs famously said the computer was like a bicycle for the mind, and Apple’s most recent ad emphasizes this: they see their products as facilitating art, noble creation, and human interaction. In his recent New Yorker piece, Tim Wu posits a useful thought experiment: a time-traveler from a century ago, speaking with a contemporary person, would think we’d achieved “a new level of superintelligence”. “With our machines,” he writes, “we are augmented humans and prosthetic gods”. I’d read this article a day before seeing Her, and it occurred to me that falling in love with OSs is something available to our augmented minds, a realm of possibility we’ve achieved, encountered and left for us to explore. As we move forward exploring the world of the augmented mind, Her is now a signpost on the journey, something to refer to in the future, as a work of art documenting these early days of super-intelligent networked achievement.

Social Networks

2012-03-29_idcrisis_poi

“I never understood why people put all their information on those sites. It used to make our job a lot easier in the CIA.”
“Of course, that’s why I created them.”
“You telling me you invented online social networking, Finch?”
“The Machine needed more information. People’s social graph, their associations; the government had been trying to figure it out for years. Turns out most people were happy to volunteer it. Business wound up being quite profitable too.”

— Person of Interest, 29 March 2012, “Identity Crisis”

The fog cleared

2014-01-11

After a foggy day, it cleared and the light played mauve tricks above the sheet of lake ice.

The Genealogy Sculpture

“The room was lightless except for the glowing coloured threads stretching from floor to ceiling in a bundle, braided into a thick, multicoloured column as wide as Chiku’s fist. The column maintained the same width until it reached eye level, where it fanned out in an explosion of threads, taut as harp-strings, which arrowed towards the ceiling at many different angles. The individual threads, which had been linear from the point where they came out of the floor, now branched and rebranched in countless bifurcations. By the time the pattern of lines brushed the ceiling, it was all but impossible to distinguish individual strands.

‘We really are remarkably fortunate,’ [Mecufi said, gesturing toward the threads with an upsweep of his arm.] ‘We nearly ended ourselves. It was only by some great grace of fortune that we made it into the present, tunneled through the bottleneck, exploded into what we are today. […] The bottleneck is the point where we nearly became extinct. There were tens of thousands of us before this happened, one hundred and ninety-five thousand years ago. Then something brought a terrible winnowing. The climate shifted, turning cold and arid. Fortunately, a handful of us survived – emerging from some corner of Africa where conditions hadn’t become quite as unendurable as they were elsewhere. We were smart by then – we know this from the remains we left behind – but intelligence played only a very small part in getting us through the bottleneck. Mostly we owe our success to blind luck, being in the right place at the right time, and then following the shoreline as it rose and retreated, over and again. It was the sea that saved us, Chiku. When the world cooled, the oceans gave us sustenance. Shellfish prefer the cold. And so we foraged, never far from water, along beaches and intertidal zones, and lived in caves, and spent our days wading in shallows. The lap of waves, the roar of breakers, the tang of ozone, the mew of a seagull – there’s a reason we’re comforted by these sounds. And here we are, a genetic heartbeat later.’

‘It’s a very nice sculpture.’

‘By the time it touches the ceiling, there are twelve billion threads. Spiderfibre whiskers, just a few carbon atoms wide – the same stuff they used to make the cables for space elevators – one for every person now alive, on Earth, orbiting the sun, in Oort communities, and the holoship migrations. I can identify your thread, if you’d like … you can watch it glow brighter than the others, follow its path all the way into history, see where three became one. See where you fit into the bottleneck.’”

From On the Steel Breeze by Alastair Reynolds.

Ice Trees

2012-12-23

The next time I see faux icicle decorations I’ll think of power outages.

The notebooks of the 2360s

“She had brought her book with her. It was an old looking thing, cased in marbled covers. Inside were pages and pages of handwritten text. Her letters sloped to the right like windblown trees.

Chiku saw an omission on one page and touched the nib of her fountain pen to the vellum. The inked words budged up, forming a space in which she could insert the missing word. Elsewhere she struck through two superfluous lines and the text on either side of the deleted passage married itself together.”

— from On the Steel Breeze by Alastair Reynolds.

Don’t Make Me Think (About Publishing)

Steve Krug’s Don’t Make Me Think is considered one of the canonical texts of web design, and as such was introduced to me as part of my web-design studies at Sheridan College, which I undertook during the 2011-12 academic year. The title alone had always been offensive to me, someone who enjoys both ideas and thinking, and I always chafed at the mindlessness it encourages.

However, the education process I went through helped me become conscious of my web-browsing behaviour, and the book is narrowly contextual to those times when we are on a website for a purpose-driven reason: for example, when we go to a theatre’s site to buy tickets, or are on some other commerce site trying to find contact info or a business’s opening hours. Primarily, the ‘design philosophy’ behind don’t-make-me-think is contextual to commercial or other service-oriented websites.

In the film Hannah Arendt, we get a wonderful defense of thought as a human activity, and an explication that the evil in question (the Holocaust) was facilitated by classic, mid-century modernist bureaucracy – especially the German version, which had been predisposed by an education system that taught obedience and discipline. The system becomes one that encourages people to disregard thought, which (as Arendt says in the film) dehumanizes us by ‘making people superfluous’ to the system. In other words, the indifference of a bureaucracy toward the individuals it is meant to serve means people end up serving the bureaucracy.

It’s worth noting that the German education system, as developed by the state of Prussia, was imported to North America a century ago to transform farmer-children into future factory or corporate employees, by teaching a tolerance for boredom and a willing and mindless obedience to managerial directives. (See John Taylor Gatto’s essay, “Against School”)

~

This decade incorporates the 20th anniversaries of everything web-related. The World Wide Web was first released in 1993 as an app that ran on the Internet (then an academic and government modem network). Now the W.W.W. is synonymous with the ’Net, and the design principles promoted by Don’t Make Me Think have become so standardized that we recognize websites as bad when those straightforward principles are violated. We know how websites are supposed to work, we recognize header menus as such, and understand what ‘home’ means.

Krug’s follow-up, Rocket Surgery Made Easy, was a second-semester text, and I found both books very hard to read, both because they are so patronizing and because he’s continually stating what is now obvious. They were written for people for whom computers, and using them, were new. Now they feel more like historical documents.

Inasmuch as we have a ‘web 2.0’ nomenclature (which in itself is about a decade out of date) I find the language shift from the ‘Net’ to ‘The Cloud’ indicative of where we are: the interconnected network was about siloed websites and email – essentially network nodes and lines of communication.

The Cloud (as a “post-Net 2.0” term) speaks to our ever-present interconnectivity, where we can download data to our devices out of thin air, and where server farms behind our screens can run the necessary compression algorithms to apply filters to our photos as we upload them.

The novelty of this technology has been intoxicating, and I myself have found it fascinating enough both to want to understand it and to participate within it professionally. But after 20 years, the novelty is beginning to wear off, and the inevitable transitions evident fifteen years ago have come to pass.

Physical publishing on paper is in decline (in some cases rightfully) whereas digital publishing is established and growing. This echoes the transition from Mediaeval manuscript propagation to the printed book, and if Gutenberg’s invention of 1452 echoes Berners-Lee’s of 1989, we are in the equivalent of the 1470s, by which time Gutenberg’s press had spread to France, Italy, England, and Poland.

The model of book-idea production has lasted since that time, until our era when we’ve figured out how to fluidly use a two-dimensional surface through the manipulation of electricity and light.

I spent a week last July helping put the finishing touches on a corporate website to publish a company’s annual report. Twenty years ago, the report would have been a booklet and print designers and typesetters would have been hired to produce it. As the novelty of working with computers is wearing off, and as our economy has shifted to incorporate them in our offices and studios, it is now obvious that this digital economy is essentially that of publishing: websites, apps and ebooks. It is supported, as it always has been, by ad money. And the big sites like eBay and Amazon represent the Platinum Age of mail-order. I grew up with Roch Carrier’s famous short story about a hockey sweater ordered from the Eaton’s catalogue. A future generation will probably know an equivalent that replaces Eaton’s with Amazon.

As I worked during that July week it occurred to me that in 200 years I would not be described as a front-end developer, nor a web designer, but perhaps just as a publisher, in the same way that Benjamin Franklin is described as a printer, not a pamphlet designer nor a typesetter. “To earn money he worked in publishing” may be all that needs to be said, for by then publishing will be digital by default, and will have been for two hundred years.

The Prosthetic Hallucination

Last week at FITC Screens 13 I got to try Google Glass for the first time. Tom Emrich was there as part of the Xtreme Labs Lounge and I tried his device for about five minutes, long enough for him to show me how to use it and go through some basic commands.

The screen was a little out of focus, but it wasn’t important to me that it be perfectly fitted and adjusted. I took a picture, swiped, tapped, looked at the New York Times’ app, and had it read it to me.

Here is a mock-up I made to record the general impression:

glass-expm-01

The rectangle was smaller than I expected, and the fact that it was back-lit / transparent-black gave it a bit of a purple, out-of-focus sheen. It’s a see-through screen hovering at your eyebrow, and I was thinking of this later when I tweeted:

I wrote that while sitting in Mike DiGiovanni’s Google Glass presentation, and watching him onstage I now understood the gestural language he was presenting: not that of someone with a neurological disorder, unable to focus on what’s in front of him, with eyes rolling upward, but someone who was experiencing a prosthetic hallucination in the right corner of his visual field.

I used the word ‘sociopathic’ specifically: a social pathology, that is, a social illness, where one behaves in a manner that is offensive, unfriendly, or unsociable.

Human interaction requires at least two people, but Glass is not a device meant for two people. It’s an internalized, private experience. When you’re wearing one, you are meant to forget that it’s on, in the same way that traditional corrective eyeglasses become forgettable to the wearer.

All the pictures we’ve seen are of other people wearing Glass, but of course this is because of how difficult it is to show the subjective experience, which is really what the product offers.

ggl-img-glass

Google Glass is a beta product, the technological equivalent of 1990s cell phones with retractable antennas. In the 90s, owning a cell phone was a little offensive, because it signaled that you were either a busy-body big-shot or narcissistic enough to think you were just as important. (I remember telling someone in 1999 that I didn’t want a cell phone because I wouldn’t want to be bothered when I was away from home.)

However, the utility of a cell phone soon meant that by 2005, almost everyone had one. By 2007, the year Apple released the iPhone, the executives and managers of the world were already carrying Blackberries and other email-capable cellphones, and I was used to seeing these people staring at their little machines while in line for coffee. It occurred to me then that the Blackberry was the wand of the overclass, and I wondered what their jobs were that they had to be checking email while in line. (At the time I carried a basic Nokia).

Now, people everywhere can be seen looking down at something they’re holding in their hands. This is such a common sight that you can find examples on Google Streetview.

For this argument I’ll refer to this posture – or, more specifically, this behaviour – as “digital attention behaviour”.

In 2007, in line at coffee shops, the future wasn’t yet evenly distributed, but now this digital attention behaviour has spread wide and become normalized.

Part of the normalization is that looking at a rectangle displaying digital messages isn’t that much different than looking at a pad of paper. We were already used to seeing people read things silently, which in itself was a revolution centuries ago, when reading was usually done aloud.

The rolling of eyes may eventually become a normalized digital attention behaviour, but right now, absent the even distribution allowing the rest of us to relate, it still looks strange and offensive.

snl-glass

Unintentionally, Google Glass manifests the Western narcissistic ego, where private experience on public display happens without care for how it affects others. The selfishness of Glass is expressed when the Other cannot tell if a picture is being taken or if the time is being checked. With a smartphone, a glance can tell you if the person is emailing, texting, web browsing, or playing a video game. The information leaks, and this information is valuable in contextualizing our person-to-person interaction.

Rendered completely private, this interferes with our Theory of Mind, our ability to imagine what others are doing and be able to relate to them. We can’t empathize without sufficient contextual clues. Inasmuch as Glass is a prosthetic for hallucination, it may also be a prosthetic for autism.

Having said all this …

I am nevertheless excited by the idea of Glass as both a prototype and an attempt to get us away from current digital attention behaviour, so that we can benefit from the data cloud while also being able to interact with one another as we did in the past. The irony is that Glass is at present such a private experience that it interferes with human-to-human interaction; this is one of the bugs that needs to be resolved.

I like to think of Glass as a pathfinder project to get us to casual augmented reality, via “smart” eyeglasses, contact lenses, and/or eventually implants, such as described in Alastair Reynolds’s Poseidon’s Children novels, the second of which (On The Steel Breeze) has this scene:

“Wait, I’m hearing from Imris again.” Her face assumed the slack composure of aug trance, as if someone had just snipped all the nerves under her skin.

In that world of implant-enabled augmented reality, an aug trance is something everyone can relate to, and fits into everyone’s Theory of Mind. It is not disturbing to see, and is an understood appearance.

Having said all this, I suspect that a product like Glass will be successful. Again, its current design is reminiscent of the first cell phones. We know from the movies that portable radio-phones were available during World War II.

200px-Portable_radio_SCR536

The original 1960s Star Trek communicators were more skeuomorphic of the walkie-talkie than of a phone, but when Motorola marketed the StarTAC phone in 1996 the reference to the fiction was obvious.

In the 2009 Star Trek movie, Simon Pegg as Scotty is seen wearing an eyepiece:

scotty-eyep

And in 1991, Star Trek: The Next Generation featured holographic eyewear which took over the minds of the crew:

thegame015

These examples show that the idea of a heads-up display is an old one; Google decided to build it, tether it to an Android phone, and begin to market it. I don’t doubt something like it will eventually be successful.

What is especially interesting is how such a simple idea and device turns out to have complicated social side effects, but these side-effects would never have become apparent if Google hadn’t taken the chance to implement this old idea to begin with.

Reflections on a Subway Sandwich

Twenty years ago, in September 1993, I was a fresh-faced frosh at Saint Mary’s University in Halifax, Nova Scotia. I had visited Halifax on weekend family trips while growing up in the province, but this was the first month of what turned out to be a six-year experience of living there.

With some free time, I took a walk downtown. The path tended to be: Inglis to Tower Rd to South Park, to Spring Garden Road. From here, I ended up walking to Barrington St.

I remember protests regarding Clayoquot Sound around this time, held across the street from the library at Spring Garden and Grafton, but I’m not sure if they were visible on this particular walk, wherein I found myself further down into the Historic Properties.

I found a Subway sandwich shop and stopped there for lunch, and recall being surprised when I was offered both mustard and mayo as toppings, a combination I had never encountered before.

Unbeknown to me, I had wandered onto the grounds of NSCAD, the art school where I would begin classes three years later. Occasionally while at NSCAD, I would look down from the library at the Subway at the intersection and recall that first walk, when I began to discover my new city. For that reason I took its picture.

subway-hfx

The Subway, as seen from the NSCAD library in 1999

Eight years later I began to be interested in computer programming and the web. Using books I began to figure out how to build web pages, and I was reading Slashdot every day. There was also a website called NewsToday® (later rebranded as QBN) which aggregated news items of interest to designers, and if I remember correctly, it was through that that I found YoungPup.net, where Aaron Boodman (youngpup) had posted “the best way” to generate a pop-up window in Javascript. (I found the posting with the WayBackMachine, timestamped Sept 19 2002.) From what I recall, through his blog – a link or a reference – I learned about Aaron Straup Cope. Through his posted online resume, I learned that we shared NSCAD as an alma mater.

“If anything I studied painting but I am mostly part of that generation for whom everything changed, and who dropped everything they were previously doing, when the web came along.” – Aaron Straup Cope (Head of Internet Typing at the Smithsonian Cooper-Hewitt National Design Museum), in his post Quantified Selfies

While I was eating my sandwich twenty years ago, he was probably in the building next door taking Foundation courses. His online resume also tells me that he was around during my first year, graduating at the end of term.

Twenty years ago the future lay before us in a tapestry of September sunshine, but just as the future of twenty years from now is being invisibly incubated, nothing was then evident. It was the first year of the Clinton Administration, and Jean Chretien’s Liberals were about to win the general election. The 90s were effectively beginning.

When I first started reading Aaron’s blog about ten years ago, he was living in Montreal. Later through his blog I learned he was in Vancouver, and later still, he was in San Francisco. I imagined CDs bought at Sam the Record Man on Barrington St that may have accompanied these travels. Reading his blog, I understood he was at Yahoo!, working on a site called Flickr.

Seventeen months ago, the Google Streetview car captured the corner as it then appeared. Here it is, posted to Flickr.

streetview

So I reflect now twenty years later on how a website like Flickr (which was big in its day and which now lingers on as an established presence) became part of the world that did away with typewriters for my generation and younger, and was in a way present at the GPS coordinates where I ate a Subway sandwich twenty years ago.

The WikiWars

I once heard it said that the internet was like Gutenberg’s printing press – and that, while revolutionary, the printing press resulted in the religious wars of a century later. This was voiced as a warning against cyber-utopianism.

Twenty years after the World Wide Web app was released so that the public could use The Internet, we have begun to see our wars play out. The religious wars of the past led to the creation of the Nation State, after the Treaty of Westphalia. Our present wars are a symptom of the breakdown of that international system.

Freedom fries: this domino line begins with McDonald’s.

Ben Ali’s son-in-law wants to open a McDonald’s in Tunisia. He meets with the American ambassador.

The ambassador goes home and writes a report on the meeting, noting the family’s opulent wealth. “He owns a tiger that eats 6 chickens a day”.

Because privacy is old-fashioned, an Army private smuggles out gigabytes of material on a re-recordable CD marked Lady Gaga, and provides it to cyber-utopian Wikileaks. They publish it along with the Guardian and The New York Times.

The Tunisia reports are emphasized by Al Jazeera, and they spread on Tunisians’ Facebook pages. Two out of ten people have Facebook accounts, because privacy is old-fashioned.

So a frustrated young man is spat on by a policewoman and sets himself on fire. Tunisians take to the streets, and inspire similar protests in Egypt, Libya, and Syria. Egypt’s dictator is kicked out. Libya has an 8 month civil war, before its dictator is finally killed. Syria’s civil war continues.

Egypt holds democratic elections but the poor vote for the wrong people, a party that wants to govern in an oppressive way. They protest again. The army comes in and removes the president. The world doesn’t want to call it a coup d’etat, because it was simply the army removing the person who should not have won the election. Democracy is only a good thing when the right people win. The people who voted for him are upset, so they protest, until the army clears the square by shooting at them. Some more people die.

Meanwhile, in Syria, chemical weapons are used. Chemical weapons were the First World War’s equivalent of nukes: number two on the list of taboo armaments, a century old and “never to be used again”. They’ve nevertheless been manufactured.

Syria hasn’t signed the anti-chemical weapons treaties. A thousand people die.

President Obama had said that the use of chemical weapons would be a line that should not be crossed, lest he send in the World’s Most Powerful Military. The weapons were used.

To be continued.

_____________________________
Worth reading:
WikiHistory: Did the Leaks Inspire the Arab Spring?
Adam Curtis’ 2011 blog posting on Syria

An esoteric argument

“A figure in the Muslim Brotherhood [said] ‘It’s not logical,’ (is the way he put it), ‘it’s not logical for President Obama to be so concerned about a thousand people killed in a chemical weapons attack when a hundred thousand have been killed, have been slaughtered by Assad in the last two years.’ And basically, people here, Jeff, do not accept this distinction that the President is trying to make between the use of chemical weapons and the wholesale killing of Syrian civilians by aerial bombardment and artillery. They see it as an esoteric argument about some international weapons convention treaty that just has no relevance to their lives.”

Me by Byron

2013-08-24

At the end of August my friend Byron Hodgins & I went to High Park and he painted me.

Retro Minimalism

From Tumblr

The argument made by Adolf Loos in Ornament & Crime a century ago (1908) was essentially racist: a European aesthete equated ornamentation with primitive barbarism. Eighty-four years later, art anthropologist Ellen Dissanayake pointed out how unusual the West's valuing of simplicity is: people all over the world use ornamentation as an expression of their humanity (Homo Aestheticus, 1992). In other words, the Western/European strain of thought has derided ornament for a long time, but this is a rejection of what the rest of the world appreciates.

As architectural critic Nikos Salingaros stated in a recent interview (speaking architecturally):

Ornament generates ordered information. It adds coherent information that is visual and thus immediately perceivable on a structure. Successful ornament does not stick something on top of form: instead it spontaneously creates smaller and smaller divisions out of the whole. Just like biological life, which is all about information: how to structure it, how to keep it together as the living body, and how to replicate it so that the genetic pattern is not lost forever. But without ornament, either there is no information, or the information is random, hence useless.

The loss of ornament is the loss of vital architectural information. Ever since that fateful moment in history, there is little life in architecture. Unornamented forms and spaces are dead, sterile, and insipid, defining a sort of cosmic “cold death”: an empty universe where no life can exist. But for a century, this empty state has been the desired aim of architects: to remove information from the built environment.

A century of thought by sophisticated individuals has resulted in Minimalism as an aesthetic trend, affecting the built environment, designed spaces, and designed objects. It is part of a story that includes not only Adolf Loos's contribution, but also that of asceticism. Minimalism seems to be one of David Martin's Christian ‘time bombs' that went off during the Industrial Revolution.

Martin’s argument is that Christianity (which we can say began as a cult in Roman occupied Judea) spread throughout the Roman world and for the past two thousand years has survived through weekly repetition of its repertoire of ideas. These ideas slowly transformed what was once Roman society — shepherding civilisation through Rome’s collapse, then preserving knowledge until Europe could restore itself in subsequent centuries. Along the way, Christian ideas have gone off like time bombs, such as human equality, and the abolition of slavery.


from CBC Ideas, The Myth of the Secular episode 2 (26:57-33:52)

Minimalism would seem to be a contemporary expression of the asceticism taught to the West through the Christian tradition, and thus the contemporary minimalist practitioner might see themselves as practicing a form of spiritual sophistication, through what they consider to be “good taste”.

However, minimalism is very future-oriented as well. In Ian MacLeod's Song of Time (2008), the narrating character (reflecting from a vantage a century from now) says of our present early 21st Century:

There were so many things in that lost world. Our house overflowed with Dad's old tapes, records and CDs, and Mum's ornaments, and both of their books and magazines, and all of our musical impediments, and my and Leo's many toys, which we never played with, but still regarded with totemic reverence.

This implies that the future world is largely Thing-less and decluttered. Of the minimalised future, consider how it is parsed by Lindsay Jensen in her essay on Oblivion:

“The flannel-wearing hoops-shooter is Jack Harper (Tom Cruise), a high tech drone repairman who lives in a futuristic compound (that more closely resembles a sterile medical lab than a cozy cottage) […] Despite the destruction of the Earth's surface, Jack and Victoria's home – in a tower high above – displays not a speck of dust or clutter, only gleaming chrome and glass. Even Victoria herself seems a piece of this decor, impeccably dressed in sculptural shift dresses … signifying Victoria as an integral element of this environment — serving the same semiotic function as her hyper-futuristic touchscreen computer, or the meals that appear as anonymous squares of surely nutrient-dense edibles served from individual vacuum-sealed pouches. These objects – of which she is one – loudly and obviously declare this as the future: a different, cold, and calculated environment in stark contrast with the relaxed authenticity of Jack's cabin. The latter is a hideaway whose existence Jack keeps secret from Victoria. She is too much a part of one world to venture into the other one he has (re)-created.”

Oblivion in fact displays the temporality of contemporary expressions of Sophistication. On the one hand, we get the minimalised, dust-free future. On the other, we get Jack's cabin, his secret retro-world, filled with the archeology of Oblivion's Earth. Here Jack wears a New York Yankees ball cap and a checked shirt, and listens to vinyl records. Old, weather-warped books rest on rough-hewn shelves. The cabin world reflects our “Dream of the 1890s” and the other hipsterisms of our present time – the Sophisticated Retronaut who has curated their life as if it were the decade between 1975 and 1985, with a vinyl record collection, gingham shirts, and the usual as displayed on Tumblr.

Sophistication seems to lie on a spectrum between the Retro Past and the Austere Future, and the display of its corresponding taste: one curates objects of “warmth” or those that are “cold”, while “sentimental” answers to “calculating”.

Consider again Nikos Salingaros' arguments about ornament: that it adds coherent information. As Ellen Dissanayake argued in her book, the display of ornamentation is widely regarded as human enhancement. Echoing work done by Michael O'Hanlon (published in 1989), Dissanayake wrote (p.102), “The Wahgi [of Papua New Guinea], reports O'Hanlon, do not consider adornment and display to be frivolous […] the Wahgi believe that an adorned person is more important and ‘real' than an unadorned “natural” person, a belief totally at variance with contemporary Western ideas…” She goes on to cite many other examples, concluding with:

Concern for dress goes along with concern for one’s bearing and manner, and these reflect the self-control and civility that humans from the earliest times seem to have deliberately imposed on their ‘natural’ or animal inclinations and regarded as essential to harmonious social life.

While the examples at this part of her book focus on clothing and dress, they serve as scaffolds to extrapolate from: our concern for ornament is a human appetite, a way that we express our supra-animal minds. Dissanayake closes the chapter in question (“The Arts as Means of Enhancement”) by narrating a brief Western art history:

For the past two hundred years… the formality and artificiality that universally characterize civilized behaviour have been suspect. Wordsworth praised the beauty to be found in the natural rustic speech of ordinary people; since his time, poetry has moved further and further away from the characteristics that marked it as poetry for thousands of years [while] 20th Century Western artists have typically been concerned with making art more natural (using ordinary materials from daily life or depicting humble, trivial, or vulgar subjects) and showing that the natural, when regarded aesthetically, is really art. […] In this they both lead and follow, absorb and reflect postmodern Western society which is the apogee of the trend I have been describing where now the natural is elevated to the cultural, where nature and the natural viewed as rare and “special” are deliberately inserted into culture as something desired. I have pointed out that most cultures like shiny, new-looking, bright, and conspicuously artificial things. […] But we prefer the worn and the faded because they look natural and authentic.

The Minimalised future is shiny and new and special, and thus in tune with our human natures. Yet it is also austere and cold, speaking to a sense of self-control and discipline, which is acquired … that is to say, civilized. Our flourishing Retro hipsterdom is the late 20th Century's postmodern concern for authenticity spoken of by Dissanayake. But maybe it is also a way for our culture to digest the records of the previous hundred years and decide what should be considered timeless, what we should formalize into the artificiality of culture that our human appetites desire.

(top image: Fuckyeahcooldesigns Tumblr)

King Baby of the Late 21st Century

The King Baby was born on Monday July 22nd. Today they announced his name was George.

Let us suppose future historians will be able to say that the Second Elizabethan Era lasted the second half of the 20th Century and a quarter of the 21st (a reign of seventy-five years). Thus:

Elizabeth II 1952 – 2027 æ 101

As Elizabeth’s mother lived to be 101, this is quite feasible. Let us then be generous and assume all future monarchs will live to be 100.

Charles III 2027 – 2048 æ 79 – 100
William V 2048 – 2082 æ 66 – 100
George VII 2082 – 2113 æ 69 – 100

Note: Wikipedia states that Charles has considered reigning under the name George, which may be unlikely now that his grandson has been given the name. Nevertheless, were he to do so, presumably King Baby George would reign as either George VIII or follow his grandfather's example and use another name.

A feudal, hyperconservative kind of society

MAN OF STEEL

“What conditions would allow a sophisticated and civilized society that has space travel to turn inward and no longer see what it's doing to itself?” he adds. “We likened it to a feudal, hyperconservative kind of society that no longer believes other planets were worth visiting and mothballed those fleets. Ancient, doddering old fools running society and paying no attention to science or more enlightened minds.” – Alex McDowell

(Sounds basically like the USA and Canada)

Desktops

These two images from National Geographic‘s Found Tumblr are currently my Desktop backgrounds:

tumblr_mm2t27ZFLa1s7f3fyo1_1280

NationalGeographic_1105658

Civilisation 2.0

“Did you ever hear of the 5.9 Kiloyear event?

“I thought not. It was an aridification episode, a great drying. Maybe it began in the oceans. It desiccated the Sahara; ended the Neolithic Subpluvial. Worldwide migration followed, forcing everyone to cram around river valleys from Central North Africa to the Nile Valley and start doing this thing we hadn’t done before, called civilization.

That’s when it really began: the emergence of state-led society, in the 4th millennium BC. Cities. Agriculture. Bureaucracy. And on the geologic timescale, that’s yesterday. Everything that’s followed, every moment of it from Hannibal to Apollo, it’s all just a consequence of that single forcing event. We got pushed to the riverbanks. We made cities. Invented paper and roads and the wheel. Built casinos on the Moon. […]

But this global climate shift, the Anthropocene warming – it’s just another forcing event, I think. Another trigger. We’re just so close to the start of it, we can’t really see the outcome yet. […]

“The warming was global, but Africa was one of the first places to really feel the impact of the changing weather patterns. The depopulation programmes, the forced migrations … we were in the absolute vanguard of all that. In some respects, it was the moment the Surveilled World drew its first hesitant breath. We saw the best and the worst of what we were capable of, Geoffrey. The devils in us, and our better angels. The devils, mostly. Out of that time of crisis grew the global surveillance network, the invisible, omniscient god that never tires of watching over us, never tires of keeping us from doing harm to one another. Oh, it had been there in pieces before that, but this was the first time we devolved absolute authority to the Mechanism. And you know what? It wasn’t the worst thing that ever happened to us. We’re all living in a totalitarian state, but for the most part it’s a benign, kindly dictatorship. It allows us to do most things except suffer accidents and commit crimes. And now the Surveilled World doesn’t even end at the edge of space. It’s a notion, a mode of existence, spreading out into the solar system, at the same rate as the human expansion front.”

From Blue Remembered Earth by Alastair Reynolds (p.150-151). The character Eunice, speaking in 2162, is explaining the development of the global Mechanism that watches over and protects the population. This is what I’ve been thinking about this week in light of the NSA revelations.

Bruce Sterling at SXSW 2013

I myself don’t go into bookstores very much now. They have become archaic, depressing places. […] How many bookstores close, as a direct ratio of hours spent with electronic devices?

I’m sure there’s some direct relationship there. And it’s not a dark conspiracy. I happen to be quite the Google Glass fan.

In fact, I’m even becoming something of a Sergey Brin fan. I never paid much attention to Sergey before, but after Google Glass, Sergey really interests me. He’s filling the aching hole, the grievous hole in our society left by the departure of Steve Jobs. With Jobs off the stage, Sergey’s becoming very Jobsian. He wears these cool suits now. He’s got much better taste in design than he did. He’s got these Google X Moonshot things going on, they’re insanely great, and so forth.

I hope Sergey’s not taking a lot of acid and living off vegetarian applesauce. But other than that, well, now we have this American tech visionary millionaire who’s a Russian emigre. It’s fantastic! There’s something very post-Cold-War, very genuinely twenty-first century about that. It’s super. Sergey’s like my favorite out of control, one-percenter, mogul guy, right now.

[…]

Since the financial panic of 2008, things have gotten worse across the board. The Austerity is a complete policy failure. It's even worse than the Panic. We're not surrounded by betterness in 2013. By practically every measure, nature is worse, culture is worse, governance is worse. The infrastructure is in visible decline. Business is worse. People are living in cardboard in Silicon Valley.

We don’t have even much to boast about in our fashion. Although you have lost weight. And I praise you for that, because I know it must have been hard.

We’re living in hard times, we’re not living in jolly boom dotcom times. And that’s why guys like Evgeny Morozov, who comes from the miserable country of Belarus, gets all jittery, and even fiercely aggressive, when he hears you talking about “technological solutionism.”

“There’s an app to make that all better.” Okay, a billion apps have been sold. Where’s the betterness? – Bruce Sterling’s keynote at SXSW 2013

Tomorrow

demain

(the ending of 2006’s Children of Men)

Krugman, “The Excel Depression”

So the Reinhart-Rogoff fiasco needs to be seen in the broader context of austerity mania: the obviously intense desire of policy makers, politicians and pundits across the Western world to turn their backs on the unemployed and instead use the economic crisis as an excuse to slash social programs.

What the Reinhart-Rogoff affair shows is the extent to which austerity has been sold on false pretenses. For three years, the turn to austerity has been presented not as a choice but as a necessity. Economic research, austerity advocates insisted, showed that terrible things happen once debt exceeds 90 percent of G.D.P. But “economic research” showed no such thing; a couple of economists made that assertion, while many others disagreed. Policy makers abandoned the unemployed and turned to austerity because they wanted to, not because they had to.

So will toppling Reinhart-Rogoff from its pedestal change anything? I’d like to think so. But I predict that the usual suspects will just find another dubious piece of economic analysis to canonize, and the depression will go on and on. – Paul Krugman The Excel Depression

“I wonder sometimes if Morozov’s disinformation campaign is a deliberate sabotage…”

I wonder sometimes if Morozov's disinformation campaign is a deliberate sabotage, an attempt to discredit those who are actually working to achieve the participatory ideal that he claims to be protecting. […] I don't mind Morozov's petty mischaracterizations of my motives; it's what he does to garner attention and I make a convenient target.

-Tim O’Reilly responding to Annalee Newitz’s overview of Evgney Morozov’s attack piece

Easter 2013

2013-03-29

This drawing seems appropriate for Easter.

École

2013-03-28

This was my elementary school.

King Solomon Doodle

2013-03-27

A doodle from the end of January when I was listening to a podcast about King Solomon.

e15e58da88ca11e2826f22000a9f13e9_7

Mormon Beats

2013-03-03

The DJ booth is a repurposed pulpit.

That’s Dan Turner aka Sex Helmet at The Founatain on Dundas.

2013-02-28-

Screen shot 2013-03-08 at 2013-03-08 • 9.38.16 PM

The night of The Tempest

2013-02-26-prospero

One ends up drawing economical-line Prosperos while listening to The Tempest audio book

Segway Day

viewer-2

I’ve always wanted to try a Segway so today @owlparliament & I did the Segway tour at the Distillery District. It was super fun. I loved floating along on that thing.

Commerce Court

I watched the remade Total Recall over Christmas. This is Commerce Court in Toronto’s downtown core.

2012-12-24 21.37.53

2012-12-24 21.37.45

2012-12-24 21.37.27

Today I began work next door and afterwards took a walk over to see it. Instead of an information kiosk, we have illuminated trees.

viewer-6

Baptism

viewer-4

The church in Bathurst New Brunswick where I was baptized. The baptism itself looked like this:

c08_1975_dsc00106

c08_1975_dsc00107

The priest was my father's uncle, Noel Cormier (who died in 2010), and with my mother are my maternal grandparents. My grandfather died in 1993, and my grandmother is still alive at age 100. It was for her 100th birthday that I was in New Brunswick when the church photo was taken.

Prada

2012-12-14_img_4645

Part of a larger project

New Mexico Drivers Licence 1998

2012-10-24_img_4391

Sam Beckett’s New Mexico Driver’s Licence, issued in 1995, as imagined in the 1993 series finale of Quantum Leap.

Pints in 1844

Tumblr tells me this is the earliest known photograph of men drinking beer. (Edinburgh Ale, 1844, by Hill & Adamson).

Pussy Riot FTW

I haven’t been that interested in the Pussy Riot trial, except to say going to jail has made their protest far more successful than it would have been if they’d been ignored. We’ve gotten used to ‘radicals’ doing something offensive and disappearing, but these girls performed a song, pissed off people enough to get arrested, forced Western journalists to compare their story to Soviet show trials, and ignited sympathetic protests around the world. When they get out of jail in two years they’ll be free-speech darlings and will probably have a widely successful global tour. If the point of their action was to highlight that Russia is intolerant of protest, then this whole story exemplifies that wonderfully.

However, Russia doesn’t care that Western people under the age of 40 who Tweet think this is outrageous. Reuters reports:

Valentina Ivanova, 60, a retired doctor, said outside the courtroom: “What they did showed disrespect towards everything, and towards believers first of all.”

A poll of Russians released by the independent Levada research group showed only 6 percent sympathized with the women and 51 percent found nothing good about them or felt irritation or hostility. The rest could not say or were indifferent.

Our Generation has no Chomsky?

I tend to think that our generation does in fact have such thinkers; they've just been hampered by structures designed to celebrate already-established “brand name” figures. Also, this generation's thinkers are less likely to go through the academic publishing route, given today's opportunities for self-publishing.

The Mechanism

Hints of “the Mechanism” appear throughout Alastair Reynolds' Blue Remembered Earth before it becomes an explicit plot element and we learn precisely what it is. Essentially, by 2162, people have been enhanced, and interact with the data cloud via retinal implants and augmented reality. There is a major political system called the United Aquatic Nations, where people live in under-ocean cities, swim a lot, and where some have even undergone surgery to become living mermaids. These people still use buttons and screens, as we do, but it seems this is mostly a necessity of their lifestyle choice.

Because of the constant, internalized connection, people generate a massive amount of data which needs to be indexed (via posterity engines), while it is also being monitored. The Mechanism is the set of algorithms which constantly monitor the data stream, and intervene if signals indicate a certain action is underway, or about to be. When Geoffrey, a main character, gets so angry he wants to hit someone, the Mechanism recognizes this and strikes him with an instant debilitating headache. (p.283) The incident is logged, and he is then scheduled for a visit by a risk-assessment team to determine the seriousness of the matter.

It is known as The Surveilled World.

I recall being a child, going through a Catholic education, and realizing that according to the teaching, God could hear my thoughts. I felt exposed, my privacy violated, and embarrassed. Was there no respite from scrutiny?

Twenty-five years later, I visited the Cloisters, the reconstructed (and frankensteined) medieval complex in New York. Throughout we see little heads gazing down from sculpted elements – these essentially are the medieval version of our black domed cameras, a reminder to the monks of eight-hundred years ago that they were constantly being watched, by a security apparatus of angels.

It seems, then, that we have some social need to construct surrogate parental oversight; that a society without watchers – a secularized society that believes in neither spiritual spies nor CCTV cameras (essentially the Western world for about a hundred and fifty years) – cannot exist without engendering existential angst (as seemingly happened).

Speaking of the social upheavals of the mid 21st Century, the character Eunice describes its development:

The warming was global, but Africa was one of the first places to really feel the impact of the changing weather patterns. The depopulation programmes, the forced migrations … we were in the absolute vanguard of all that. In some respects, it was the moment the Surveilled World drew its first hesitant breath. We saw the best and worst of what we were capable of, Geoffrey. The devils in us, and our better angels. The devils, mostly. Out of that time of crisis grew the global surveillance network, this invisible, omniscient god that never tires of watching over us, never tires of keeping us from doing harm to one another. Oh, it had been there in pieces before that, but this was the first time we devolved absolute authority to the Mechanism. And you know what? It wasn't the worst thing that ever happened to us. We're all living in a totalitarian state, but for the most part it's a benign, kindly dictatorship. It allows us to do most things except suffer accidents and commit crimes. (p.150)

The quote continues:

“And now the surveilled world doesn’t even end at the edge of space. It’s a notion, a mode of existence, spreading out into the solar system at the same rate as the human expansion front. But these are still the early days. A century, what’s that? Do you think the effects of the 5.9 kilo year event only took a hundred years to be felt? These things play out over much longer timescales than that. Nearly 6000 years of one type of complex, highly organized society. Now a modal shift to something other. Complexity squared, or cubed. Where will we be in a thousand years, or six thousand?”

This is worth quoting in full since it hints at Alastair Reynolds' larger project: Blue Remembered Earth is merely the first novel of a planned trilogy, reported to span into the far future. I imagine the next book will take place centuries ahead, and be part of the answer to this speculation.

The 5.9 kiloyear event referred to was a period of intense desertification circa 3900 BCE, which triggered worldwide migration to river valleys, out of which emerged the first complex city-states. The character is suggesting that present climate change is a similar event, one which will drive us into new ways of living. In the book, one of these new ways is globalized surveillance, complete with a thought-control mechanism.

However, there is an area on the Moon that exists outside the Surveilled World, and Geoffrey visits his sister Sunday there early on. They stroll through a market while Sunday explains social theories regarding crime as a necessity for innovation and social health. The chapter ends with Geoffrey reaching into his pocket for his hat, and finding it missing …

The hat, it began to dawn on him, had been stolen. The feeling of being a victim of crime was as novel and thrilling as being stopped in the street and kissed by a beautiful stranger. Things like that just didn't happen back home. (p.65)

The Mech, then, is an interesting possibility for where we might be headed. One consequence is that, by 2162, there are no police forces and no jails: the novel mentions in passing a news item about the demolition of the planet's last jail, a facility in Mexico.

Book Trailers

Last week, the Cloud Atlas trailer was released and it has reportedly driven up sales of the book on Amazon.

It’s an interesting effect, given that books have begun to have trailers produced by their publishing houses.

Neil Stephenson’s 2009 Anathem had a trailer…

…as did my latest favorite book (of which I’ve been writing about lately), Alastair Reynolds’ Blue Remembered Earth:

While the B.R.E. trailer helped me understand what the book entailed, it was the Amazon Kindle's sample chapter that convinced me.

(For that matter, I bought Cloud Atlas at the end of December when I saw the concept art on io9.com, and got hooked by the sample chapter as well).

I have no information on whether publisher-produced trailers have had any impact. However, if publishers wanted to produce six-minute masterpieces like the Cloud Atlas trailer (a distillation of imagery from a reportedly near three-hour movie), then the form would come into its own.

This unintended effect generated by the Cloud Atlas trailer may convince them to do just that.

I love the idea of the mini-film being constructed out of choice scenes, of the quality that suggests an excerpt from a larger cinematic work.

Kind of like Francesco Vezzoli's 2005 art-video, Trailer for a Remake of Gore Vidal's Caligula


(Gotta love the Gladiator soundtrack. Also, how has this been on YouTube for two years?)

Engines • This is Already Happening 2

I tweeted this last week, but I should note it here:

Bing, DuckDuckGo, Yahoo, and Google are Search Engines.

Wikipedia is a Find Engine

I wrote something on Posterity Engines.

Today, all the talk is about “Search”, and databases are being built up on ‘search behavior'. Google has a zeitgeist listing that tells us what people have been looking for, and one result of this database is its prediction algorithm, which guesses what you might be searching for, or tells you what other keywords match a search phrase.

If the conversation shifted to ‘finding’, what then? We have Yahoo! Answers and Wikipedia, and all the other websites in the world. Google’s dominance began with the quality of their ‘finds’ – the websites they suggested best matched your search.

If we shifted to analyzing ‘find behavior', we would begin to build up a database of which sites are being accessed most often … and yes, this is already happening, and it essentially drives Google's algorithms. The site at the top is the one you'll most likely want, because others have chosen it as well.
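To make that concrete, here is a toy sketch (the names are my own inventions, nothing standard): a ‘find behavior' database just records which result people actually choose for a query, then ranks future candidates by that tally.

// Toy sketch of a "find behavior" database: record which result
// people actually choose for a query, then rank candidates by that tally.
const finds = {}; // query -> { url: clickCount }

function recordFind(query, url) {
  const tally = finds[query] ?? (finds[query] = {});
  tally[url] = (tally[url] ?? 0) + 1;
}

function rank(query, candidates) {
  const tally = finds[query] ?? {};
  // Most-chosen results float to the top.
  return [...candidates].sort((a, b) => (tally[b] ?? 0) - (tally[a] ?? 0));
}

recordFind("find engine", "en.wikipedia.org/wiki/Web_search_engine");
console.log(rank("find engine", [
  "example.com/essay",
  "en.wikipedia.org/wiki/Web_search_engine",
])); // the previously-chosen page now ranks first

The design choice is the one described above: treat past ‘finds' as votes.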

Essentially, I find the use of the word ‘engine’ to name these processes of indexing databases curious, and found it especially interesting when coupled with the word ‘posterity’. It began to make me think about the data we are creating, and how it might be archived, accessed, and named. We are currently living under a ‘search’ paradigm but the future will inevitably complicate this, until ‘search’ will no longer be an adequate word.

Posterity Engines • This is Already Happening 1

One of the more interesting reconceptualizations I’ve come across lately is that of a “posterity engine” which is in Alastair Reynolds’ latest novel, Blue Remembered Earth. I have the Kindle edition which allows me some quick and easy textual analysis; the term appears four times thus:

1. “Across a life’s worth of captured responses, data gathered by posterity engines, there would be ample instances of conversational situations similar to this one…”

2. “…[it was out there somewhere in her] documented life – either in the public record or captured in some private recording snared by the family’s posterity engines.”

3. “It doesn't know anything that isn't in our archives, anything that wasn't caught by the posterity engines…”

4. “He was old enough not to have a past fixed in place by the Mech, or posterity engines…”

The context of these sentences implies something not much more complicated than a contemporary search engine. The novel is set in 2162, 150 years from the present and publication date (2012).

The phrasing – “captured”, “snared”, “caught” – speaks of today's search-engine crawl: crawling across a site, building up a database of links, content and keywords. At some point in our future, our terabytes will be warehoused and crawled by personal search engines, indexed for our future uses – that is, for posterity.

This is already happening; we just don't label these processes with ‘posterity'. Our Apple computers are already running Spotlight crawls that index our local storage, and Time Machine is snapshotting our hard drives. Windows has an equivalent Start Menu search bar.
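In that spirit, a toy sketch (all names invented; nothing from the novel, and nothing like Spotlight's actual internals): a posterity engine reduces to capture plus index. Everything gets a timestamp when it's snared, and an inverted index makes it recallable later.

// Toy "posterity engine": snare dated snippets, index every word,
// recall them later. All names here are invented for illustration.
const archive = [];
const index = new Map(); // word -> [entries]

function capture(text) {
  const entry = { text, when: new Date().toISOString() };
  archive.push(entry);
  for (const word of text.toLowerCase().match(/[a-z0-9]+/g) ?? []) {
    if (!index.has(word)) index.set(word, []);
    index.get(word).push(entry);
  }
}

function recall(word) {
  return index.get(word.toLowerCase()) ?? [];
}

capture("Ate a Subway sandwich; geotagged the photo on Flickr");
console.log(recall("flickr")); // -> [{ text: "...", when: "2012-..." }]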

Imagine then the contemporary as laughably quaint, and imagine five future generations' worth of personal petabytes stored somewhere (a central core server per home?) that requires contemporary Google-grade search to make useful.

I’m reminded of the fact that when Google began in 1998, its storage capacity was 350 GB. An off-the shelf MacBook Pro could have run Google in the late 1990s.

The Dark Knight Rises

I saw The Dark Knight as a Monday matinee during our August long weekend in 2008. By that time, it had been out for two weeks and had already generated a lot of buzz. It seemed everyone was talking about and praising its greatness. I did not go as a fanboy of Christopher Nolan nor of Batman, but to merely catch up, and see the sequel to Batman Begins, which I learned about late during its 2005 theatrical run and almost missed.

I liked Nolan’s Inception, and seeing that movie cemented into my mind the idea that I like his films – tone, atmosphere, cinematography. However, was that atmosphere in The Dark Knight Rises? I know in the future I’ll have this film on my system and I’ll put it on as working wallpaper … or will I? Did it have that slow-burn quality against a rich backdrop and wonderful Hans Zimmer soundtrack? Yes, the Zimmer soundtrack delivered, and yes Bane was menacing, but I feel (at this point) that the only part of the film that lived up the hype was the prologue, which had been shown in December, and which made my jaw drop the first time I saw it as a blurry pirated internet clip.

Even though a lot was familiar from the trailers etc., I went in with questions about the uprising, the answers to which turned out to be straightforward: a gang of thugs, the release of prisoners, the cops held hostage.

The whole “city under siege for five months” plot at the end didn't work, as it was unbelievable. I think in a post-9/11 world it would have been a lot more panicked, and the coercion of getting the military to guard the one bridge off the island would never have worked.

I did find Catwoman’s motivations interesting: she wants software that can erase her from all the world’s databases. “Collated, analysed, what we do sticks”. This concern of her seems slightly ahead of the time but only by a week or two. That’s a plot line that will make more sense as time goes on when this movie is just another file in a database we can download with our cloud accounts.

All in all, it seems too soon to judge TDKR. As a stand-alone, it's weak. However, in the future, when we can play the series back-to-back on our own time, when a teenager can spend three nights watching the movies on their tablet before bed, then its failings and successes will be clarified. Perhaps its tone has nuances that we can't pick up yet, but that will become obvious later.

New Design

Beginning in November of last year (2011), I began to experiment with responsive design on a new blog site, the ultimate goal being to move my blog from timothycomeau.com/blog (where it had been for years) to its own dedicated URL: timothycomeau.info.

Over the course of the winter (and while I was studying Web Design at Sheridan), I hacked away at it, using it as a playground to try new ideas and further my understanding of WordPress, and especially Responsive Design.

Unfortunately, by the time I came to graduate, I was left with a mangled site only half-developed for what I'd intended: not only a blog, but also an archive of my previous web content. The archive part was unfinished and unresolved.

At about the same time, I began to look into the themes on Themeforest.net and, in order to learn more about how they were built and functioned, bought one, which I put on the site in May. I was asking myself: how does this theme work, and does my content stand up to its design?

However, I soon grew frustrated with the implementation. That theme was designed for portfolios mostly, and I wanted the site to function as a blog primarily. It was evident that it should be scrapped.

About a month ago, I hacked together a very simple theme for my localhost WordPress Journal. As a Journal, the end that needed to be served was reading, and so design-wise it needed to emphasize and encourage that.

Essentially, I ported that design over into this one. I wanted something as simple & clean as words on a page. With the basic structure in place, I think I’ve reached a final version.

A human heart (2006-2091)

Any Human Heart: the story of a 20th century man, published in 2002, written in the late 1990s, and with the fictional lifespan of 1906-1991.

A Human Heart: the story of a 21st Century man, published in 2102, written in the late 2090s, and with a fictional lifespan of 2006-2091. What story then awaits today’s 6 year old?

Prometheus

Since I saw the first trailer at Christmas I’ve been looking forward to this day: then, the December early evening darkness, the loss of leaves, the cold weather. The trailer came to us as a Christmas gift, along with a trailer for Batman: The Dark Knight Rises (July 20) and The Hobbit (December). Then at the end of February, the first viral video, the TED talk, followed by the Weyland website(s) and more viral videos, of David and of Elizabeth Shaw. Now, finally, the movie is in North American theaters, having opened a week ago in the UK.

I saw it today in IMAX 3D and I was thankful that I had, being rewarded with glorious landscape shots for the first part of the film, and then the glorious space shots as we see the ship shrunken against the backdrop of both interstellar space and alien cloud. It lands in a clearing, facing a series of mounds, which contain the sculpted head we’ve seen throughout. The science team investigates, runs into problems, everyone dies, but in the end Shaw and the head of David the Robot survive, and take off in one of the other alien ships (associated with the other mounds) heading to the stars and presumably the Engineer’s home world.

Some s-f movies (and tv shows) can be dignifying: you leave their world feeling infused by the narrative of a mythology, a feeling undoubtedly behind the ancient myths. Sometimes, stories can animate the imagination in such a way as to give a sense of meaning and purpose. I recognize this as real, but also a trick – an illusion (or, a mental illusion, a delusion) that has something to do with how our brains are wired. Just as certain patterns can trick our visual sense, certain narrative patterns can trick our ‘meaning sense’. All of religious history is probably a side effect of such games. Now, we play these tricks for entertainment, using them for movies and television shows.

So it isn’t so much pretense as actuality when the films makers talk of creating a new ‘myth’. Prometheus the 2012 film is a new myth, taking for its name an old myth, and taking for its back story a successful monster film (set within a 20th Century space-age context) directed by Riddley Scott 33 years ago. Who were the ‘space jockies’ of that film? We now know they were Engineers, who seeded Earth through sacrifice millennia ago (or at least it is implied, as that scene is not dated). The Engineers play with genetic technology: our sacrificer drinks a concoction that causes him to disintegrate, in the process converting his cellular structure into a virus, or merely genetic fragments: he falls into the primordial waters and thus the human DNA matrix has been introduced, to emerge out of mammalian primates later on.

The story of the ancient astronaut is compelling, I'll admit. Four years ago I attended a Charles Darwin exhibition at the ROM, and was struck at the end by the display of skulls. Even though I'd studied physical anthropology in university, and even though I was familiar with the scientific narrative, seeing all the skulls together made an impression that something is missing in the genomic treeline. One can see how Homo Erectus is a form of Homo Neanderthalensis; all have a similar shape, similar brow-ridge, all are evidently part of an evolutionary story. But the outlier is the gracile Homo Sapiens Sapiens: smooth-boned, high-foreheaded, small-chinned. Perhaps something did intervene to make the brow ridges disappear, to make us more graceful.

Prometheus leaves the story open for a sequel: presumably in the next movie Shaw finds the Engineers' home world and more story elements are revealed, and the third movie will probably be an alien-invasion flick set on Earth, post post … perhaps the Engineers are the reason Earth is destroyed by the end of Alien3

I didn’t feel dignified leaving the theater, but rather diminished. My humanity cheapened, the delusion only playing the depressing trick of making our Creators seem malevolent. Perhaps the overall implication is that we’re some kind of livestock to incubate the biological weapon of the xenomorphs.

(In reality, the story will probably turn out to be that sapient life evolved out of the dinosaurs and colonized parts of the solar system either before 65 million years ago or during the time between (human beings comparatively have a 2-million-year timeline, so there's room in history for this), and the so-called Greys are of the oceans of Europa, and for some reason fiddled with our genome in the past – as has been suggested by The X-Files TV show of the 1990s, whose last episode told us the aliens were coming back on Dec 21, 2012. In fact, Prometheus seems to owe much to The X-Files, inasmuch as both use a life-force of “Black Oil”.)

Prometheus is also a generational parable: Vickers wants her father (Weyland) to die so that she can take over (Vickers being his daughter was pointless otherwise), but this model is also that of the whole: the children of the gods (humanity) want their parents (the Engineers) to die so that we … can take over the universe? And here is an echo of the Prometheus 1.0 myth: the god gives man technology, and his fellow gods, angered, inflict their famous punishment, because they know that with that technology man will one day challenge them for supremacy.

This tale echoes in a summer of student protest in Montreal.

Websites of the 2080s

On June 8th, Ridley Scott’s Prometheus will premiere in North America before going on to exist as a download or digital file encoded on a disc. Since February there has existed an online marketing campaign consisting of videos and websites, which have prompted me to consider this daily technology as it is now, and how it could be by the time the movie is set, some eighty years from now (reportedly 2093).

By that time we will have resolved a lot of technical issues. Formats will be even more standardized, maybe resembling the way magazines are templated.

Also, I imagine that some other technology will have subsumed the html/css/javascript trinity that is currently behind all sophisticated websites.

HTML is merely a collection of bracketed tags inserted into text files. Barring the development of quantum computers, or quantum/classical hybrids (which Kim Stanley Robinson calls ‘qubes' in his latest novel, 2312), text files will remain what they are today. HTML as a collection of tags need not necessarily go away … when I began implementing my Journal as a localhost WordPress database, I did so with the understanding that my Journal as a collection of Word .docs was likely to be unreadable in 20 years, whereas HTML was probably future-proof.

CSS is another form of text file, but it is already giving way to LESS/SASS as an interface to writing it.
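For instance, a minimal sketch of what SCSS (one of SASS's syntaxes) adds over raw CSS – variables and nesting – with the class name and variable invented for illustration:

/* SCSS source: a variable and a nested rule */
$accent: #00fff0;
.post {
  color: #222;
  a { color: $accent; }
}

/* which compiles down to the plain CSS text file:
.post { color: #222; }
.post a { color: #00fff0; } */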

JavaScript has exploded as a programming language. I remember when people used to write “… in case people have JavaScript turned off” – and why would they have it turned off? Early on there were security issues. This all seems ancient history now: JavaScript has become a necessity, making websites seem like something belonging to a computer (that is, an interactive publication rather than a digitized magazine). JavaScript has advanced so much in the past five years that Flash is definitely on its way out as a web-interface medium.

The idea of something replacing HTML/CSS/Javascript in ten years (2022) is unrealistic. However, by 2022, we may (as we are now seeing with LESS or SASS) have the hints of something else, with working groups considering the re-invention of the technical language of the Cloud.

By 2042, then, we may have something else. Browsers will still be able to read a webpage from our era by piping the text files through a deprecated renderer, or some form of built-in emulator (something like what OS X began doing with the introduction of Rosetta).

In the past month, I've dived into exploring WordPress themes and understanding the possibilities offered by WordPress as a CMS. I've used it as a blogging platform for five years, but only in the past six months have I begun to understand its use and potential to drive contemporary websites. The technical sophistication offered by off-the-shelf themes I found frankly stunning, and it is this model I foresee going forward. However, it is this very complicated collection (the WordPress backend remains a mess) that I imagine will be stripped down and simplified, so that by the 2080s the database-to-text-file interface will be so streamlined that there will be nothing complicated about it.

All of this inter-relation could be integrated into one backend coding interface, and we’ll have something like this eventually.

What is this new form of spam?

Both today and a week ago I got these strange emails from women with M & J initials, which seem to come from ambitious young writers. I don't understand.

I suspect it’s some new kind of bot, & perhaps these emails are an intelligence test? AKA let’s see how many people we can fool into responding.

(I admit that I responded a week ago).

Hypercard April 1997

I recently completed the Web Design Program at Sheridan College, and on the night of our grad show remembered the Hypercard project I did as part of “Introduction to Computers” at NSCAD back in the spring of 1997. Given that Hypercard was in many ways a precursor to the web, I wanted to revisit this project as a document of my proto-web work.

I recall doing this very last minute, and at the time I was listening to a lot of Beethoven, so the project was a quick walk through complete with a sound sample of Beethoven’s 5th Symphony taken from the cassette I removed from my Walkman.

The teacher gave me an A.

I recovered this project on 15 May 2012 using a USB diskette reader and the BasiliskII Mac emulator. The cards are presented in a looping sequence:

1) Ludwig van
2) The Skull
3) The Hand
4) The Ear
5) Deafness
6) [back to Ludwig Van and Play his 5th]

Grayscale

  • 000000
  • 010101
  • 020202
  • 030303
  • 040404
  • 050505
  • 060606
  • 070707
  • 080808
  • 090909
  • 101010
  • 111111
  • 121212
  • 131313
  • 141414
  • 151515
  • 161616
  • 171717
  • 181818
  • 191919
  • 202020
  • 212121
  • #2e3137

On Black

<div style="width: 200px; height: 200px; position: relative; z-index: 1; background: black; clear: both;">
  <!-- nested inside the relatively positioned black square, so the 50px margins place the cyan square on the black -->
  <div style="width: 100px; height: 100px; position: absolute; z-index: 2; background-color: #00fff0; margin-top: 50px; margin-left: 50px;"></div>
</div>

Medium Specificity

Lucian Freud was reported to have called Leonardo da Vinci a terrible painter, which on the face of it seems like an old man's contrarian fun. But it's not inexplicable.

In Da Vinci’s time, paintings were moving away from Mediaeval stylization toward what we’d consider ‘hand made photographs’. Artists of the time wanted to depict retinal reality, and Da Vinci was the master at this.

Vasari wrote that one could almost perceive the pulse in Mona Lisa‘s neck, an effect which really isn't unbelievable. In the Leonardo Live broadcast I saw this past week, La Belle Ferroniere appeared to breathe, which I attribute to Leonardo's sfumato, where the softness of the edges echoes the effects I once observed in a Rothko: because the edges have no definable boundary, an optical illusion of movement is created, so the Rothko seemed to pulse, and the Da Vinci portrait seems to breathe.

Lucian Freud, on the other hand, was a master of medium specificity, the modernist mantra promoted by Clement Greenberg in the mid 20th Century. His critique of Da Vinci was precisely from this point of view: Leonardo sucks at painting because his paintings don't look like paintings. For Leonardo and his contemporaries, this was a success. By the standards of the late 20th Century, it is a failure.

For me, the best example of the medium specificity ethos can be found in Charles Dickens's 1854 novel, Hard Times:

‘Would you paper a room with representations of horses?’ [asks the government bureaucrat addressing Mr Gradgrind’s school]. “I’ll explain to you why you wouldn’t paper a room with representations of horses. Do you ever see horses walking up and down the sides of rooms in reality – in fact? Of course no.

Why, then, you are not to see anywhere, what you don’t have in fact. What is called Taste, is only another name for Fact.

This is a new principle, a discovery, a great discovery.

You are to be in all things regulated and governed by fact. We hope to have, before too long, a board of fact, composed of commissioners of fact, who will force the people to be a people of Fact, and nothing but fact. You must disregard the word Fancy altogether. You have nothing to do with it. You are not to have, in any object of use or ornament, what would be a contradiction in fact. You don't walk upon flowers in fact; you cannot be allowed to walk upon flowers in carpets. You don't find that foreign birds and butterflies come and perch upon your crockery; you cannot be permitted to paint foreign birds and butterflies upon your crockery. You never meet with quadrupeds going up and down walls; you must not have quadrupeds represented on walls. You must use, for these purposes, combinations and modifications (in primary colours) of mathematical figures which are susceptible of proof and demonstration. This is a new discovery. This is fact. This is taste.”

This mid-19th Century anti-imagery disposition is in fact an echo of the ancient prohibition against images found in the Bible. That prohibition reasserted itself during the iconoclastic years of the Catholic and Protestant split, and is also found within the tenets of Islam, of which this official's prescription may be a parody: in banning representation, Islamic art developed geometric pattern to a degree we find astonishing.

A century after Dickens' words were written, medium specificity had become the dominant aesthetic ethos. Art historians tend to bring photography into the explanation, since photography was superior to, and easier to accomplish than, a Da Vincian masterpiece. Painting was left to explore its possibilities as a coloured viscous medium.

By our early 21st Century, we've left aside concerns that painting needs to do anything except be a painting. Young people continue to take up brushes because painting is an interesting and fun thing to do, and occasionally wonderful things result.

1854

As noted, Dickens' novel was published in the mid-1850s. This was a time when photography was just beginning, and the dominant aesthetic movements were Academic Classicism and Romanticism. The Pre-Raphaelite Brotherhood was active, formed around the idea that art before Raphael (a younger contemporary of Da Vinci) was superior to the work that came after him (the imitation of Michelangelo known as Mannerism).

Realism, as an art movement, was also happening during this time, and 1854 was the year Courbet painted his famous Bonjour Monsieur Courbet. Realism, as described by Wikipedia:

attempt[ed] to depict subjects as they are considered to exist in third person objective reality, without embellishment or interpretation and “in accordance with secular, empirical rules.” As such, the approach inherently implies a belief that such reality is ontologically independent of man’s conceptual schemes, linguistic practices and beliefs, and thus can be known (or knowable) to the artist, who can in turn represent this ‘reality’ faithfully. As Ian Watt states, modern realism “begins from the position that truth can be discovered by the individual through the senses” and as such “it has its origins in Descartes and Locke, and received its first full formulation by Thomas Reid in the middle of the eighteenth century.

The Great Exhibition of 1851 had put on display a variety of consumerist products made by machines. Influenced by the Pre-Raphaelites, and the writings of Ruskin who championed them, the Arts & Crafts Movement began in the 1860s, led by William Morris. From The Arts and Crafts Wikipedia page:

The Arts and Crafts style was partly a reaction against the style of many of the items shown in the Great Exhibition of 1851, which were ornate, artificial and ignored the qualities of the materials used. The art historian Nikolaus Pevsner has said that exhibits in the Great Exhibition showed “ignorance of that basic need in creating patterns, the integrity of the surface” and “vulgarity in detail”. Design reform began with the organizers of the Exhibition itself, Henry Cole (1808–1882), Owen Jones (1809–1874), Matthew Digby Wyatt (1820–1877) and Richard Redgrave (1804–1888). Jones, for example, declared that “Ornament … must be secondary to the thing decorated”, that there must be “fitness in the ornament to the thing ornamented”, and that wallpapers and carpets must not have any patterns “suggestive of anything but a level or plain”. These ideas were adopted by William Morris. Where a fabric or wallpaper in the Great Exhibition might be decorated with a natural motif made to look as real as possible, a Morris & Co. wallpaper, like the Artichoke design illustrated (right), would use a flat and simplified natural motif. In order to express the beauty of craft, some products were deliberately left slightly unfinished, resulting in a certain rustic and robust effect.

In 1908, Adolf Loos published his (in)famous essay ‘Ornament and Crime‘ (trans to Eng: 1913), which argued that ornament was a waste of energy, in addition to applying racist and moralistic interpretations (equating the tattoos of the South Pacific natives with primitive barbarism). As Wikipedia notes, this essay is an historical marker as a reaction to ornamental style of Art Nouveau:

In this essay, he explored the idea that the progress of culture is associated with the deletion of ornament from everyday objects, and that it was therefore a crime to force craftsmen or builders to waste their time on ornamentation that served to hasten the time when an object would become obsolete. Perhaps surprisingly, Loos’s own architectural work is often elaborately decorated. The visual distinction is not between complicated and simple, but between “organic” and superfluous decoration. He prefigures the Brutalist movement that spreads from the 1950s to the mid 1970s.
(Wikipedia: Adolf Loos)

This post was edited on 3 Nov 2013 for clarity

Impressive

I can’t believe somebody did this.

From a Tumblr. Work credited to Aubrey Longley-Cook.

Applying timeless page design principles to the web

This morning I found Alex Charchar’s page on ‘the secret canon & page harmony’ as presented in the past by Jan Tschichold.

Using the Van de Graaf Canon, one divides a page spread thus:

This is based on a 2:3 ratio page size. However, the spread makes the overall ratio 4:3.

Not coincidentally, our monitor resolutions are based on a 4:3 ratio:

1024 = 4 × 256; 768 = 3 × 256

1280 = 4 × 320; 960 = 3 × 320

We can apply the Van de Graaf Canon to a 1024 x 768 webpage like this:

As Tschichold showed, the circle indicates that the height of the resulting text block is equal to the width of the page – or in our case, half of the screen width (1024/2 = 512).

Tischold’s summarized Van De Graaf’s geometric method as the simplest way to create the outlines that were also used by medieval scribes, which all result in a text-block that fits within a 9 x 9 grid.

(animated gif from Alex Charchar's article)

Essentially, we can determine the size and position of a content block by taking any page size and dividing it up into a 9×9 grid.

A book spread which divides both pages into 9 columns results in a grid of 18 columns and 9 rows; however, our 18 columns can be condensed into a 9 x 9 grid without loss of effect. (Eighteen columns merely subdivide the otherwise 9 into halves.)

The content block sits 1 column in from either side, two rows above the bottom, and 1 row from the top.
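Worked through for a 1024 x 768 screen (my own arithmetic, rounded to the nearest pixel):

one grid column = 1024 / 9 ≈ 114px; one grid row = 768 / 9 ≈ 85px
content width = 7 × (1024 / 9) ≈ 796px
content height = 6 × (768 / 9) = 512px

which is within a few pixels of the 800 × 512 block measured from the geometric construction discussed below.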

Responsive Web Design

All of this would make a wonderful guideline for laying out the basics of a webpage were this still 2005, when 1024×768 had become the ne plus ultra after years of 800×600 CRT monitor resolution settings. Today (early 2012), a webpage needs to resolve on a variety of screens, from iPhones to giant monitors.

In order to have responsive content, it helps to code elements as percentages rather than specific pixels.

Our 1024 x 768 example creates a content block with an 87px top margin, a 166px bottom margin, and side margins of 112px. The content box itself measures 800 × 512 (the height being equal to the page width: 1024/2 = 512).

Coding anything in pixels, though, is unreliable, since browser windows are never consistently sized across machines, and padding creates effects which make pixel precision difficult.

What is needed is to achieve this 9×9 grid with percentages, so that this proportion can be rendered across resolutions.

In order to determine this, I coded a 9-row and 9-column table with a div overlaid using z-index. I fiddled with the CSS size and margins until I got something that matched the constraints.

The result is:

#container {
  margin-top: 11%;
  margin-left: 11%;
  margin-right: 11%;
  width: 78%;
  height: 67%;
}
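Those fiddled values are approximations of exact ninths; written out with the fractions (a sketch of the same #container, to more decimal places):

#container {
  margin-top: 11.111%;   /* 1/9 */
  margin-left: 11.111%;  /* 1/9 */
  margin-right: 11.111%; /* 1/9 */
  width: 77.778%;        /* 7/9: the side margins account for the other 2/9 */
  height: 66.667%;       /* 6/9, leaving 2/9 below the block */
}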

The page looks like this:

Demo

Hello Mammal Lovers!

This Friday (February 10) at the Drake Lab (1140 Queen St. West), in coordination with our current residency, “Open Cheese Office Grilled Songwriting Sandwich,” we’ll be cooking, hosting and celebrating the FOURTH annual Timothy Comeau Award, and you are invited! Swing by at 7pm or after to eat, be merry and find out who the winner is this year! Bring yourself, your friends, your drinks, your musical instruments, and the rest (an abundance of decadent cheese-variation sandwiches) will be provided.

The Timothy Comeau Award was created to recognize individuals who have shown exceptional support, interest and love for Mammalian. Recipients of the Timothy Comeau Award have been participants in many of our activities and have also offered insights, analysis and criticism. These are people without whom our events and existence would feel incomplete.

The 2010 winner of the Timothy Comeau Award was Sanjay Ratnan. Sanjay has been hanging out with MDR and participating in Mammalian events since he was a fetus. He continues to contribute his talents, ideas and super-star personality to Mammalian as a member of The Torontonians.

The 2009 winner of the Timothy Comeau Award was Kathleen Smith. Kathleen is not only a super Mammalian supporter, but thanks to her three nominations, MDR won the Toronto Arts Foundation’s Arts for Youth Award in 2009, a $15,000 cash prize!

The 2008 winner of the inaugural Timothy Comeau Award was Timothy Comeau. Timothy is a writer and cultural worker who has a couple of blogs including (curation.ca). He has been a constant supporter of the company, writing about our work, showing up to our events, goofing around and generating the kind of vibe that is essential to us. A Mammalian event without Timothy is a Mammalian event that’s happening on another continent.

Who will it be this year?!!

Hope to see you Friday!

MAMMALIAN DIVING REFLEX
Centre for Social Innovation

Fixing a Lightbox issue

I ran into a problem using Lightbox plugins on my blogs. I’m currently running modified versions of the Constellation Theme, which is full of HTML5 goodness and styled to re-flow according to screen size (i.e. it is mobile adaptable).

No matter which Lightbox plugin I tried after upgrading WordPress to the latest version (3.3), the overlay showed a margin and an offset, as exemplified below:

This is because of the way the overlay is coded to affect the <body> tag. Constellation styles the <html> tag in ways usually reserved for <body>, so by making a change to the Lightbox JavaScript file, one can correct this behavior.

In this case, I’m using Ulf Benjaminsson’s wp-jquery-lightbox plugin.

=== Fix ===

1. Open `wp-content/plugins/wp-jquery-lightbox/jquery.lightbox.min.js`.

2. Search for “body”; it is found twice as a string, in code like the following:

1) ....;a("body").append ....
2) ....;a("body").append ....

which correspond to lines 67 and 71 in the non-minified file.

3. Change “body” to “html” in both:

1) ....;a("html").append ....
2) ....;a("html").append ....
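For context, here is roughly what that change looks like in the un-minified jquery.lightbox.js. This is an illustrative sketch (the plugin’s actual markup, variable names and surrounding code differ), not the file’s verbatim source:

// Before: the overlay markup is appended to <body>, so it inherits
// the offsets created by Constellation's stylesheet:
//
//   jQuery('body').append('<div id="jquery-overlay"></div>');
//
// After: appending to <html> instead escapes that styling, and the
// overlay covers the viewport without the margin and offset:

jQuery('html').append('<div id="jquery-overlay"></div>');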

Information is Quantum

Monday, August 8th, 2011 @ 11:00 AM
BA1160 (Bahen Building)
University of Toronto
40 Saint George Street

“INFORMATION IS QUANTUM”
How physics has helped us understand what information is and what can be done with it.

Biography:

Charles H. Bennett received his Ph.D. from Harvard University in 1970 for molecular dynamics studies (computer simulation of molecular motion).

Following graduation, he worked at the Argonne Laboratory for two years. Since 1972, he has been at IBM, where he has played a major role in elucidating the interconnections between physics and information. He developed a practical system of quantum cryptography in collaboration with Gilles Brassard and John Smolin, and he is also known for discovering “quantum teleportation”. Other research interests include algorithmic information theory and the physics of computation. Bennett is known as one of the founding fathers of quantum information theory.

Bennett is a Fellow of the American Physical Society and a member of the National Academy of Sciences. He was awarded the 2008 Harvey Prize by Technion and the 2006 Rank Prize in opto-electronics.

For more information, please go to http://cqiqc.physics.utoronto.ca/ .

From “In Search of Civilization”

“Dramatic growth in consumption has happened in the last thirty years: a period when the arts and the humanities have been unambitious in their efforts to guide and educate taste. The accumulated wisdom of humanity, concerning what is beautiful, interesting, fine or serious, was – to a large extent – left to one side at the precise time when the need for guidance was greatest, and when guidance was hardest to give, and so required maximum effort and confidence.

When one looks at celebrated figures of those worlds – such as Andy Warhol or today, Jeff Koons and Damien Hirst – and asks what does their art say to people about consuming, the answer is very little. I do not want to attack those particular individuals; they seem, among other things, to be creations of a profoundly damaged culture that tells itself it is being clever and sophisticated and up to date for the wrong reasons. The cultural laurels – and a species of authority that goes with them – have been awarded in unfortunate directions. Mockery, irony and archness are not what we need.

While the works of these artists have gained amazing commercial success, they suggest a loss of purpose in the arts. Loss, that is, of a really central and powerful claim upon the education of taste: upon the sense of what is beautiful, gracious or attractive.

We have suffered an astonishing corruption of consciousness practised upon us by a decadent cultural elite. Think of the language of contemporary praise: a building is admired because it is ‘interesting’ – like the average newspaper column. The gap between ‘interesting’ and ‘glorious’ or ‘adorable’ is vast. An artist is praised for being ‘provocative’ – like someone bleating into a mobile phone on a crowded train. We are miles from ‘profound’, ‘tender’, ‘magnificent’.

All of this has come about because of a misreading of history. It has been supposed that the point of high culture – of the greatest imaginative and creative effort – is to unseat some fantasized ruling class who had to be provoked and distressed into change. But that is not the task of art or intelligence. Their real task is to shape and direct our longings, to show us what is noble and important. And this is not a task that requires any kind of cagey, elusive obscurity. The way forward here is to be more demanding, truthful and – at first – courageous. We have to forget the shifting patterns of fashion. Something is good because it is good, not because it was created yesterday or five hundred years ago.”
– John Armstrong

Internal Revenue Stamps

$600.xx Chili Sept 1 1866
Received of James Goldw
Six hundred dollars to
apply one contract for the
building of his house in Chili
Henry B Kimble

$450.xx Chili Sept 8 1866
Received of James Goldw
four hundred and fifty dollars to
to apply on contract for the
building of his house in Chili
Henry B Kimble

$200.xx Chili Sept 15 1866
Received of James Goldw
two hundred Dollars
to apply one contract for
the building of his house
in Chili
Henry B Kimble

In order to fund the Civil War, in July 1862 Congress created the United States’ first income tax as well as the Internal Revenue Service. Taxes were also collected on documents through the use of stamps.

“Schedule A described the income tax and other taxes payable directly to the Office of Internal Revenue, including inheritance taxes; duties on carriages, yachts, and other luxury goods; and various duties on business activities. Schedule B described the taxes to be paid on documents, which required the use of adhesive stamps directly on these documents.” – Revenue Stamps: Financing the Civil War (PDF)

This document, which I purchased for $3 at the Christie Antiques Show, appears to show both levels of tax collecting: one on the contracts, and the other in the use of the 2-cent document stamps.

Chili appears to be an area outside of Rochester NY, and a search for Henry B Kimble of Rochester shows he may have been granted a patent in 1854 for a sash fastener.

(Ancestry.com)

Data Codes

[flv:http://timothycomeau.com.s3.amazonaws.com/blog/wp-content/uploads/2011/02/datacodes.flv 500 374]

From the episode Conundrum

A refuge for emotional cripples and patriotic fools

Petty Officer 2nd Class, CFR, Anastasia Dualla:

“My dad went crazy when I enlisted. He thought the military was a joke. A refuge for emotional cripples and patriotic fools.”

Q: “But you signed up anyway…”

Dualla: “I guess I just wanted to believe in something”.

Battlestar Galactica: Season 2: “Final Cut” (2005)

7 3 2 24
17 9 5 5
9 4 18 2
3 20 11 2

White-out & ink on a Cadbury wrapper.

The sketchbook tradition


Alex Livingston, Untitled, Chromira print on Dibond, 2010, 48in × 68in
from the Leo Kamen website


“The sketchbook tradition has pretty much died out,” he says. “The sketchbook offered a lot of portability, as you generated ideas on the go. I now travel with my drawing tablet and my laptop.”

– Alex Livingston, as reported by Peter Goddard, The Toronto Star, Jan 12 2011

Goddard, in speaking with Livingston about his show (currently on at Leo Kamen Gallery in Toronto), explains that Livingston now uses a Wacom tablet with his laptop rather than paints and paper. I myself looked into getting a Wacom tablet in 2009, but decided against it for the time being, as I still like using inks and brushes and prefer that tactility while image-making; it’s just as easy to scan the work afterward as it is to create it directly on the computer.

What I wonder about, though, is the measure of this shift. I came of age, and was inspired to be an artist, through the experience of 500-year-old materials: notebooks, manuscripts, paintings, and the older the better. I saw myself as working within that tradition, in effect creating things that would themselves be 500 years old one day. What, then, is truly going on (what is the measure of this shift) when a professor at a prestigious art school says “the sketchbook tradition has pretty much died out”? If I were to ask, “will people in a century even understand paper?”, is there an analogy which will help me understand what that experience will be like?

Art and Food

Since I was a child I’ve been fond of Jesus’ parable of feeding the spirit: that man cannot live by bread alone, but also requires the word of God. I think the reasons I’ve always appreciated this were because it was well explained to me by a teacher who had formerly been a priest, and it made sense to me in a manner that has remained true to my life as I’ve lived subsequently. That the spirit, or mind requires feeding seems self-evident.

This idea has been relevant to my interest in the arts, and I’ve also noticed over the years a personal preference for food metaphors. Food, after all, is a substance we ingest, we bring into ourselves, where it is transformed into something disgusting that comes out the other end of our bodies. This transformation is called digestion, and we understand that through this process we remain alive by deriving nutrients, in effect becoming “what we eat”.

This physical digestion mirrors that of the mind: we continually ingest, take into ourselves, ideas that enter our mind through conversation, reading, and general interaction. Our minds continually process the languages of our environment, be they symbolic, gestural, or spoken, and ‘digest’ them into some part of our worldview, and subsequently some part of our sense of self.

Almost everyone alive is capable of feeding themselves in some way, even if they are not actually able to cook a meal. In that sense, we are all literate in the symbology of the gastronomic spectrum, all the way from food freshly killed in a hunt to the four-course meal of a fine restaurant. Along the spectrum are canned foods we merely reheat, sandwiches, and fast-food burgers. So-called special occasions require meals at the higher end of the spectrum, whereas quotidian meals after a long day can occur at the lower end.

Carr: Art is absurdly overrated by artists, which is understandable, but what is strange is that it is absurdly overrated by everyone else.
Tzara: Because man cannot live by bread alone.
Carr: Yes, he can. It’s art he can’t live on.
-Tom Stoppard, Travesties (1975)

If the spectrum of food goes from the self-acquired meal to restaurants, on what spectrum does art lie? Why in effect, is my question being asked? Because Art is a strange and forever undefinable thing, precisely because it is a food of the mind, an intangible and a philosophically confused concept. As Wittgenstein sought to make clear a hundred years ago, some philosophical problems are merely problems of semantics, entanglements of concepts without a clear language. Art is such a thing: forever subject to pithy definitions which merely become mottoes for one of its clique camps. For the conceptualists art is something different than for the painters, and thus like God it is subject to much under its name, in a variety of churches under many flags.

Why is it we consider it normal for children to draw? And why do we find it usual that adults mostly do not draw? For that matter, why do we find it normal for children to play, and find it usual that most adults do not play, but those who do are honoured as actors? In keeping with my food theme, children do not eventually grow out of making food for themselves. Sure, there are people who ‘can’t cook’ but presumably this means they are reliant on heating up frozen dinners. Food making remains a part of our lives throughout, while art making is allowed to disappear.

But does it? If you can’t cook, that can be done for you – simply go to a restaurant or a soup kitchen. But art? One goes to a gallery, and hence a gallery is analogous to a restaurant. Or, like the ever-present unquestioned nature of food culture, we could say the dominance of the created visual products we call TV shows and/or movies (even video games) is somehow reflective of our appetite for imagined products.

Galleries do not seem to think of themselves as restaurants for the spirit, offering menus of imagined products. However, if pressed, I think they would see the similarity between the haute cuisine chef and the international exhibiting artist.

“Human beings took our animal need for palatable food … and turned it into chocolate souffles with salted caramel cream. We took our ability to co-operate as a social species … and turned it into craft circles and bowling leagues and the Metropolitan Museum of Art. We took our capacity to make and use tools … and turned it into the Apollo moon landing. We took our uniquely precise ability to communicate through language … and turned it into King Lear.

None of these things are necessary for survival and reproduction. That is exactly what makes them so splendid. When we take our basic evolutionary wiring and transform it into something far beyond any prosaic matters of survival and reproduction … that’s when humanity is at its best. That’s when we show ourselves to be capable of creating meaning and joy, for ourselves and for one another. That’s when we’re most uniquely human.” – Greta Christina, Sex and the Off-Label Use of Our Bodies (My source)

Creating anything is a human thing to do: we take basic foods and make meals, and we take sticks and make symbols. Every day we manipulate a set number of symbols in composing text messages and emails, and to do so is to be part of our human community. A teenager unable to text (i.e. write) in today’s world would be one who is cut off from their community, and thus damaged. Being human is to be both a meal maker and an art maker, but importantly, I am using the word “art” in a generic creative sense, to encompass everything learned and extensive of the imagination, such as writing quotidian messages, or the dominating created world of pop culture.

Along the food-spectrum analogy, almost everyone is capable of making a sandwich. Culturally, the creativity of the everyday is not very advanced. Once we get beyond sandwich making, the understanding of these cultural worlds diverges: the fine restaurant has a place in our lives that a fine gallery doesn’t.

“Food” as a word is easily understood as something encompassing a long spectrum of things that are ultimately put in the mouth. But Art, through its semantic confusion, is not easily reduced to something “put somewhere”. It does not have an obvious end point, but is instead described as “experienced” or “felt” or “seen”.

What interests me is why the analogy of restaurants so easily breaks down, and why Art remains perceived as something privileged and removed, whereas restaurants and food culture are so thoroughly embedded. Why do galleries exist dependent on grants, whereas the idea of supporting a restaurant by grants is absurd? The easy answer is that the physical need for food makes food culture obvious, but we do not speak of art as a psychological need, which would make its cultural contribution obvious as well. Another obvious answer is that pop culture feeds the psychological/imaginative appetite so thoroughly that only those with “finer palates” seek out the higher forms in prestigious galleries. This is analogous to the “culture war” within food: buying organic and local vs. fast & processed.

In Art culture, we have fast and highly processed food as well. And just as a diet consisting entirely of highly processed food is extremely unhealthy, it is probably equally mentally unhealthy to be a digester of corporatized pop culture exclusively. Unfortunately, like a Big Mac filming a Whopper, reality television has begun to exploit the end products of generations of television: these terrible, stupid people who are not (in the old sense of the term) “cultured” precisely because they are instead “pop cultured”, and thus comfortable with confessing to video diaries and being idiots on camera.

“I guess I used to think of myself as a lone agent, who made certain choices and established certain alliances with colleagues and friends,” he said. “Now, though, I see things differently. I believe we inherit a great river of knowledge, a flow of patterns coming from many sources. The information that comes from deep in the evolutionary past we call genetics. The information passed along from hundreds of years ago we call culture. The information passed along from decades ago we call family, and the information offered months ago we call education. But it is all information that flows through us. The brain is adapted to the river of knowledge and exists only as a creature in that river. Our thoughts are profoundly molded by this long historic flow, and none of us exists, self-made, in isolation from it.” – David Brooks, Social Animal

To be alive is to participate in a food stream, and to be human is to participate in a knowledge stream. As human beings, we participate in a collectively created culture which subdivides into subcultures, two of which are food-related and art-related. Food culture is so healthy in its level of participation that people need to be careful around it, lest they become obese, while art culture is a muddied, confused and sclerotic thing, always being defended and dependent on social largess.

Clearly, the place of Art in our lives requires a rehabilitation—one which recognizes its place in a healthy and full life. Just as a diet consisting entirely of fast food is dangerous, so too is a mental life informed solely by corporatized products. However, this is not to be read as a defence of government grants, but simply to remind you that restaurants do not require support. If we include film, we may already have a healthy art-culture. If we consider art to be something solely related to galleries, we may ask why haute cuisine is not dependent on grants, or why the art experience away from commercialization insists on being free, when it is free food that one really requires.

Teenage Mutant Ninja Turtles

Using Google’s new Ngram viewer to plot the popularity of the Renaissance artists: Leonardo da Vinci, Michelangelo, Raphael and Donatello.


Overview 1450-2008


Overview of the 20th Century 1900-2008

What surprised me is the immense popularity of Raphael for most of the past five hundred years, which only really declined a century ago, between 1900 and 1920. Michelangelo gets a spike in popularity in the late 1950s for some reason, whereas Leonardo enjoys a steady state of interest.

Having been interested in Leonardo for twenty years, I would have expected more spikiness in his line: the discovery of his lost Madrid Codices seems to have caused a spike in popularity and publishing in the 1970s, mirrored by the past decade’s spike due to Dan Brown’s The Da Vinci Code.

Of course, if I turn off the smoothing, the graph immediately gets a lot spikier. Here is the overview for the past five hundred years:


‘Leonardo da Vinci’ plotted over 1500-2008

Leonardo clearly enjoyed the majority of his fame in the 18th Century

And here is the 20th Century:


‘Leonardo da Vinci’ 1900-2008

Which shows that despite my intuition, Da Vinci’s popularity declined between 1960 and 1980, and there was no real spike in the past ten years.

My time using Chrome

I stopped using Firefox last year, when I began using Chrome (through the development version, Chromium) in September 2009. As it underwent rapid development on the Mac, I updated frequently and began taking periodic version snapshots, until January of this year, shortly before Chrome went official for Mac.

2009-10-06 1:52pm

Chrome = 4.0.220.1
Chromium = 4.0.221.5 (27975)

2009-10-15 9:06pm

Chrome = 4.0.222.5
Chromium = 4.0.223.1 (29225)

2009-10-23 11:43am

Chrome = 4.0.223.8
Chrome = 4.0.223.11 (8:56pm)
Chromium = 4.0.224.3 (29892)

2009-11-15 1:48pm

Chrome = 4.0.254.0
Chromium = 4.0.249.0 (32026)

2009-11-24 8:58pm

Chrome = 4.0.249.12
Chromium = 4.0.257.0 (32997)

2009-12-20 10:31pm

Chrome = 4.0.249.43
Chromium = 4.0.277.0 (35069)

2010-01-06 7:11pm

Chrome = 4.0.249.49
Chromium = 4.0.288.0 (35431)

2010-01-14 8:29pm

Chrome = 4.0.249.49 (35163)
Chromium = 4.0.299.0 (36242)

2010-01-24 10:37pm

Chrome = 4.0.249.49 (35163)
Chromium = 4.0.306.0 (36978)

2010-11-28 1:18pm

Chrome = 7.0.517.44
Chromium = 6.0.443.0 (50319)

2140s

In the late 2140s, people have a thing about masks. More later.

What does this mean?

When I first read this I thought it was a nice way of pointing out the dangers of an aristocracy – the exact thing the 18th Century Enlightenment thinkers made their reputations attacking. At that time, the awfulness of society was seen in part to be the result of the establishment being ill-educated and having merely been born into their positions of power.

I read this as saying:

The best argument exemplifying an elitist aristocracy is ‘you shouldn’t have to know something in order to be in charge of it’

or perhaps

By their example, those “in favour of this” show the limitations of thinking that people shouldn’t need to know something in order to run it.

But then again, is it a defence of elitism? Is Goldsbie actually saying:

“The best argument for an elitist society is the example of those people who think they can run things without knowing anything about it. We should have an educated elite who know what they are doing.”

Je ne sais pas.

Santiago Sierra

Santiago Sierra and the Art World Politics of Rejection | Selby Drummond

It’s hard to imagine, given these parameters, a country from which Sierra would accept an award. And, with this in mind, even harder not to conclude that Spain virtually volunteered itself to go like a lamb to the slaughter. Conflating notions of artistic gesture and political protest, Sierra’s work has pretty much been sending Spain this same rejection letter since, like, 1999, in so many words. The artist has paid Chechen refugees minimum wage to remain hidden inside cardboard boxes in a gallery for long stretches (2000), Iraqi immigrants to stand docile while he sprays them with insulation foam (2004), prostitutes [whom he paid in heroin] for the privilege of publicly tattooing their backs (2000), and African immigrants to dye their hair blonde (2001). Sierra uses money to buy people and subject them to degradation and abuse at so low a price that the audience is forced to wonder if endemic government failure hasn’t flat-out subsidized the transaction, let alone created the conditions for its occurrence. Taking a page from the terrorist strategy book, Sierra makes a gratuitous show of ethical violence in order to mirror and expose its proliferation in what we might call “society.” And the show goes on because of, as Sierra says in his letter, “the freedom… art has given me… which I am not willing to resign.”

curation.ca/673/

Jan Verwoert: Why are conceptual artists painting again?

[audio:http://timothycomeau.com.s3.amazonaws.com/audio/20101101_jan_verwoert.mp3]
Download


The Justina M. Barnicke Gallery presents:

Jan Verwoert

Why are conceptual artists painting again?
Because they think it’s a good idea.

November 1, 6:00 – 8:00
George Ignatieff Theatre
Trinity College, University of Toronto
15 Devonshire Place (between Bloor and Hoskin)
Admission is free

Presented in conjunction with Traffic: Conceptual Art in Canada 1965-1980
September 10 to November 28, 2010
University of Toronto Galleries

Berlin-based critic Jan Verwoert has been examining the developments of art after Conceptualism. Held in conjunction with the exhibition Traffic: Conceptual Art in Canada 1965-1980, his lecture is concerned with the way in which the basic conditions of art practice have changed and what words and models might be used to open up the potentials at the heart of the developments in art after Conceptualism.

As he writes: “The dominant models no longer satisfy. It makes no sense to melodramatically invoke the “end of painting” (or any other medium-specific practice for that part) when the continuous emergence of fascinating work obviously proves apocalyptic endgame scenarios wrong. Yet, to pretend it were possible to go back to business as usual seems equally impossible because the radical expansion of artistic possibilities through the landslide changes of the 1960s leave medium-specific practices in the odd position of being one among many modes of artistic articulation, with no preset justification. How can we describe then what medium-specific practices like painting or sculpture can do today?

Likewise, it seems that we can still not quite convincingly describe to ourselves what Conceptual Art can be: An art of pure ideas? As if “pure” idea art were ever possible let alone desirable! An art of smart strategic moves and puns? We have advertising agencies for that. The social and political dimension of Conceptualism has been discussed, but often only in apodictic terms, not acknowledging the humour, the wit, the existential, emotional or erotic aspects, as well as the iconophile, not just iconoclast motives, that have always also been at play in the dialectics and politics of life-long conceptual practices.

Unfortunately, a certain understanding of conceptualism has had incredibly stifling effects on how people approach their practice, namely the idea that to have a concept in art means to know exactly why you do what you do – before you ever even do it. This assumption has effectively increased the pressure on artists to occupy the genius-like position of a strategist who would clearly know the rules of how to do the right thing, the legitimate thing. How could we invent a language that would describe the potentials of contemporary practice, acknowledge a sense of crisis and doubt, yet break the spell of the senseless paranoia over legitimation – and instead help to transform critical art practice into a truly gay science based on a shared sense of appreciation and irreverence?”

Jan Verwoert teaches art at the Piet Zwart Institute in Rotterdam, works as a contributing editor to frieze magazine and writes for different publications. His book Bas Jan Ader – In Search of the Miraculous was published by Afterall/MIT Press in 2006. The collection of his essays Tell Me What You Want What You Really Really Want has just been published by Sternberg Press/Piet Zwart Institute.

The lecture is presented in advance of the international conference Traffic: Conceptualism in Canada. Organized by the Justina M. Barnicke Gallery, the conference is held in conjunction with the exhibition Traffic: Conceptual Art in Canada 1965 – 1980 which is on view at the University of Toronto Galleries until November 28.

Registration opens November 1, 2010.

The exhibition and conference are made possible through the financial support of the Canada Council for the Arts, the Ontario Arts Council, the Social Sciences and Humanities Research Council of Canada and the Hal Jackman Foundation.

Justina M. Barnicke Gallery
Hart House, University of Toronto
7 Hart House Circle
Toronto, ON M5S 3H3
CANADA

jmb.gallery@utoronto.ca
www.jmbgallery.ca
416-978-8398

May 14 1934

1934-05-14_01

1934-05-14_02

1934-05-14_03

1934-05-14_04

…………………………………………………………………………………….

[Monday] May 14 [1934]

Dear Mother and Dad,

The new cookies are fine, even though not as good as old standby. So is the bread. You’ll only have one more lot to send this year, that is if you want to.

Last week was much cooler again and there hasn’t been any more swimming since I wrote you. We are also mowing lawn here, though I don’t have to mow as much as I would if I were home.

I’m sorry to hear the old car is acting up, though apparently this wasn’t anything serious. When my license renewal blank comes I wish you would send it here (or if it doesn’t come, send the old stub and I’ll get a blank.) I can fill it out an have the license sent home again.

This week we took all day trip in Farm Management. We went through Geneva and saw the Agricultural Experiment Station, then north to Lake Ontario, stopping to see three farms on the way. It was the first time I had seen the lake and it is quite like the bay at home. Of course, you can’t see the other side, and it is not as blue as the bay.

Two of the farms are in the fruit belt along the lake, and are little more than big orchards. The cherries were in blossom and the apples were just ready to come out. The trees there don’t seem to be much hurt by the winter cold.

This weekend several hundred high school boys who expect to come to Cornell soon were up to visit the place. It is a new thing this year, called Cornell Day, and seemed to go off pretty well. Two boys stayed here in the house and seemed to enjoy themselves.

Did you see any of the dust cloud the papers were talking about over New York? It must have missed us. It is pretty dry even here, though.

Friday night George and I walked down to see George Arliss in “The House of Rothschild.” It was one the best I have seen him in.

Saturday night there was an electrical exhibition in the Electrical Engineering College. They had artificial lightning, power line connections and generators, teletype, telegraph, telephone, radio, and a lot of other exhibits, all very interesting. They advertised it by a loudspeaker hung out the window. You could hear it easily a quarter of a mile away, but it was very distinct also. There was also a track meet Saturday, in which we beat Penn. very easily, so that it wasn’t very interesting.

It’s warmer now – maybe there will be swimming this afternoon.

Love,

Orville

…………………………………………………………………………………….

The House of Rothschild, from Archive.org:

His name is Pantalone (not Pants) & I’ll vote for him.

I’m not sure if this is a problem of social myopia (birds of a feather flocking together), but it seems I keep hearing and reading that people want to vote for Joe Pantalone but feel that it’s a wasted vote. (I myself expressed as much in my last blog posting a week ago). Thus, everyone who’d like to vote for him is now of the mind to vote strategically. I write this because I’m now under the impression that maybe there’s a silent majority of people out there who favour him but who are being frightened into voting for his rivals.

It doesn’t help that the Globe and Mail “guardedly” endorses Smitherman while only mentioning Pantalone once in its 800-word endorsement. The context is notable:

Mr. Ford […] is an instinctual person, lacking in analysis, and his plans have gaps and inconsistencies. His propensity to impetuous words and deeds could be embarrassing and possibly harmful to Toronto. Nonetheless, the surge in support for a man with these characteristics, in a sophisticated, cosmopolitan city, amounts to an extraordinary indictment of the status quo. It is a phenomenon that all Toronto’s politicians must take seriously; Mr. Smitherman has already repositioned himself, shifting on the ideological spectrum from where he probably would like to be. Where Mr. Ford is unrealistic, Mr. Smitherman is vague. The risk in supporting Mr. Ford is what he might do as mayor, the risk in supporting Mr. Smitherman is what he might not do. The latter of the two has failed to articulate a vision or a strategy of his own, and he could easily end up as a second David Miller – what Joe Pantalone, the third candidate, openly promises to be.

I resent the idea that I need to fear either rival, and that I should vote for Smitherman for any reason not of my choosing. I resent the media casting this (and thus skewing the poll results) as a two-person race. I resent the blackmail that a vote for Pantalone is an indirect vote for someone who is “potentially embarrassing and harmful”.

I’m going to vote for Pantalone. If Ford wins, at least we’ll have a shake-up of the status quo. If Smitherman wins, well, at least he’s not Ford. If Pantalone wins, well, at least we might finally get bike lanes and continued marginal improvements in TTC outpaced by increased fares.

Why my vote for Rob Ford would be an anti-Ford vote

For those not in Toronto: In the latest Toronto mayoralty-election news, Rocco Rossi dropped out last night, October 13th, leaving it (as the papers would have it) a contest between Rob Ford and George Smitherman. Rob Ford is considered to be an oaf, and George Smitherman was once deputy premier of the province. Neglected in this assessment is the presence of Joe Pantalone, who quipped to Smitherman in a recent debate, “the mayoralty is not a consolation prize for failing to become premier”.

Officially, there are 40 people running for Mayor, two of whom have officially withdrawn. With the exception of the above named, the remaining 35 are considered unserious novelty candidates. Joe Pantalone has been deputy mayor under the departing David Miller, and is running on his legacy.

My Facebook feed is representative of his fan base: numerous calls stating Toronto needs pants, and the like. Pants pants pants. Along with William Gibson’s latest novel, ’tis the season for pants.

Pantalone has become the traditional NDP third-party candidate who won’t and can’t win. He’s polling (Oct 13, Globe & Mail) at 11%, which is traditional NDP territory. He’ll drain votes away from the anti-Ford Smitherman, and Ford will be Mayor.

However, according to the same poll, Smitherman is up 1% against Ford at 31% to 30%. Pantalone supporters – this is a given – would never vote for Ford, thus if their vote went to Smitherman, he’d win by a hefty margin: 42% against 30%.

Needless to say, our democracy is a sham, since these numbers don’t even cross over 50%.

It’s interesting how this vote is being framed by the media as a contest between Smitherman and Ford, continually neglecting Pantalone. By keeping that narrative alive, the illusion of a contest between S&F can be maintained. The media is itself a type of conservative, conserving the narratives it has on hand; their familiarity with Smitherman as an Ontario cabinet minister means he is given favourable attention despite his admitted past addiction to “party drugs” (coke?) and his inept handling of the eHealth portfolio, in which $1,000,000,000 went missing.

I’ve only voted Conservative once in my life, during my first Federal election in 1993. At that time, I was naive enough to vote C merely because I liked the fact that we had a female Prime Minister in Kim Campbell. She famously lost to Jean Chretien, and Chretien went on to govern for ten years. In those interim elections, I began to vote for the NDP, a trend which continued right up to the last election.

Given that I have never voted for the party or candidate who ends up winning, I’m considering using this juju against Rob Ford by voting for him. My vote for Ford would thus be an anti-Ford vote.

If I voted my conscience and for the candidate who mostly represents my views, I’d join my Facebook peers and vote for Joe Pantalone, thus guaranteeing he won’t win.

An 18th Century Staple

Three weeks ago I bought a packet of receipts dated April 1799 at a flea market. They were held together by what we’d call a finishing nail.

Henry VIII’s armour

I don’t usually do the tourist pose, but in this case I indulged, partially because before seeing this at the Met at the end of June, I’d been watching David Starkey’s Henry: Mind of a Tyrant on TVO (which is available on iTunes). Seeing the armour made the history and the man (especially his kingly size) tangible.

Nobody Can Ever Question

Alberta’s culture minister says:

“I sit here as a government representative for film and television in the province of Alberta and I look at what we produce and if we’re honest with ourselves, why do I produce so much shit? Why do I fund so much crap?”

and this is a response:

“I was at a loss when I heard the statement – a complete loss and quite surprised and quite taken aback for every producer and content maker in Canada, let alone Alberta,” said CBC Television General Manager Kirstine Stewart, who was in the audience. “Nobody can ever question the quality of what we do here in Canada, creatively or otherwise.”

I take issue with the way this was instinctively (that is, without forethought) phrased:

Nobody can ever question the quality of what we do here in Canada, creatively or otherwise.

I think there’s a genuine problem in Canada when culture is subject to such dictatorial sentiments.  There is certainly a culture of complicity in place, where we are expected to fall in line or be subject to censorship. I think it’s fair to say that Freedom of Expression within this country has been perverted into a freedom of expression in support of the status quo, and within the ideological confines established by Management.

“I was at a loss …” yes of course you were, because someone says something controversial, and instead of laughing, or simply disagreeing, you have to dig in your heels and make Dear Leader statements.

What we do here in Canada is apparently fucking awesome, as the embedded movie trailers below show:

Posted via email from Timothy’s posterous

Thomas Hirschhorn at the Power Plant

[audio:http://timothycomeau.com.s3.amazonaws.com/audio/20100519_thomashirschhorn.mp3]
Download

INTERNATIONAL LECTURE SERIES / Thomas Hirschhorn
May 19, 2010

Call the Harbourfront Centre Box Office at 416.973.4000 to purchase/reserve tickets.
Please note: reserved Members’ tickets will be released for resale if not picked up by the start of the lecture.

The celebrated Swiss artist Thomas Hirschhorn (born in 1957, Bern) discusses his recent Amsterdam-based project, The Bijlmer Spinoza-Festival (2009). Since the 1980s, the Paris-based former graphic designer has evolved a radical sculpture and installation practice that makes monumental works with humble materials like cardboard and packing tape to engage viewers in conversation about philosophy and global politics. Recent solo exhibitions have taken place at the Museo de Arte Contemporaneo de Castilla y Leon, Spain (2006), Musée d’art contemporain de Montréal (2007), Museo Tamayo, Mexico (2008), and the Gladstone Gallery, New York (2009). Hirschhorn has received the Marcel Duchamp Prize (2001) and the Joseph Beuys Prize (2004).

International Lecture Series Lead Donor
J. P. Bickell Foundation

Cultural Agency Supporter
Consulat Général de France à Toronto

Prices
FREE: Members
$12: Non-Members

Wednesday, May 19, 2010
7:00PM
Studio Theatre
York Quay Centre, 235 Queens Quay West
(Map)

Three Versions of Western Art History

01. Western Art history in brief

Everything makes sense up until the 1960s. Essentially, artists were craftspeople throughout history. Michelangelo really was only a housepainter, employed to illustrate The Bible. Money made the work more ornate, but the Old Masters were craftsman employed to create images such as portraits and decorated ceilings.

In the 19th Century, industrialization invented oil paint in tubes. Suddenly artists could take trains out to the countryside to paint landscapes on the weekend. (Why they wanted to paint landscapes has to do with the then-new Romantic sensibilities). Painting outdoors, they became more interested in capturing their impressions of what they saw, rather than spending a lot of time finishing the work according to the standards of the day.

Claude Monet, Tulip Fields in Holland, 1886

While these artists were doing this, the ‘academic’ artists had moved on from illustrating the Bible and had begun illustrating the Classical mythology of Greece and Rome.

John William Waterhouse, Ulysses and the Sirens, 1891

Because what the academics were doing was boring, the Impressionists gained popularity, due to their example of allowing an artist to do whatever they wanted. So by the time Picasso begins working, he’s all like fuck it, I’ll just draw some crazy shapes and give them eyes and call it a portrait.


Pablo Picasso, Tête d’homme, 1912

Picasso distorts art history here, as the galleries get hip to what he’s doing and realize they can sell his stuff for all sorts of reasons, including the radio-land sense of a new civilisation based on cheap energy, and so Picasso has a chateau-based life of daily doodling which sells for millions. The distortion he creates in the art market means that artists all over the Western world think to themselves, ‘if he can do it, I can do it’. Craftsmen working in the 15th-19th Century traditions (late 19th Century academics and contemporary places like the Academy of Realist Art) get marginalized in favour of the gang after Picasso’s easy money and easy lifestyle.

Basically, by the 1920s, artists have full licence to do whatever they want. Picasso can call geometry a portrait, and in New York Duchamp can call a urinal a fountain. By the 1950s, artists are all like, fuck portraiture, use a camera for that, let’s just put colours together. Imagery is boring.


Mark Rothko, Orange and Yellow 1956

Artists are now doing whatever crazy shit they want to do. A bed with paint splashed on it? Fuck it, why not.


Robert Rauschenberg, Bed 1955

By the time we reach the 1960s, there has been a full breakdown of the tradition of craftsmanship.

Also, by this point, the technologies of video & film have begun to appear, so by the 1970s a first generation of TV babies has arrived and wants to make their own TV shows, producing a lot of black & white and unwatchable television. Technology is cheap, and artists are no longer just craftspeople asked to make a statue for a garden or decorate a ceiling; they’re now in the business of ideas. Books, words on walls, videos of Buddhas staring at themselves: an explosion of cleverness and wit. The Picassoesque art market is able to absorb, promote, and sell all this stuff, both to rich people and to institutions.


Nam June Paik, TV Buddha 1974

We’ve now had half a century (1960-2010) of crazy-shit art. The aesthetic experience written about by 18th Century philosophers has been replaced by the WTF? impulse. Artists today are not seeking to generate emotions of the sublime or of disinterest; rather, evoking a sense of bewilderment in the viewer is seen as an achievement.

The decline of craftsmanship has been compensated for by the ego of the artist: like Duchamp, Picasso, and the unwatchable video artists, the message is, yes, anyone can do this shit, but I did it. Since not all artists are insufferable egotists, a subtext to this strategy is the belief that the variety of human experience should mean that their ideas, presented through the gallery or however, may be valuable to someone. The artist offers their work both as a self-promotional vehicle and as something another may find useful. (Quite often, it is most commonly used as a conversation topic).

I could also refer here to Richard Rorty’s definition of genius as an obsession that proves useful to others. Private obsessions we just call crazy, but when an individual’s ‘craziness’ opens new avenues for others, we consider that person brilliant (as in ‘they light the way for others’). The postmodern condition of this half-century has been one in which people are free to make up their own truths. While it is a sign of mental health to be aware that not everyone thinks the same, when exploited it can be dangerous (i.e. truthiness). The crazy-shit art of the contemporary reflects the many truths competing for attention, and the multitude and anarchy of art-products and art-production today offers a variety of individual obsessions seeking to be useful to others.

02. Another brief history of art

The Roman portrait bust is representative of the craftsmanship of the era, used for public-relations purposes and to document the individuals of a time and place.

By the end of the Empire, the busts had declined in quality and become stylized.


A ‘barbaric’ millennium follows until the ‘regeneration’ (renaissance is a French word meaning ‘rebirth’) of both ancient art and learning begins to restore both the quality and craftsmanship, so that by the 19th Century, the academics were illustrating both the myths of Rome and Greece, and the daily street scenes of fifteen-hundred years prior.

Sir Lawrence Alma-Tadema, Sculptors in Ancient Rome 1877

The United States of America was founded in the late 18th Century as a restored Roman republic.

Horatio Greenough, George Washington as Zeus 1840

By the late 20th Century, The United States represented the completion of the project to restore Rome, and had become an Imperial power. However…

…its art had become stylized, and craftsmanship was in decline. The civilisation was exhausted. Artists were exhibiting glittered cum stains on newspapers.

Dash Snow untitled “Dead Man” 2006

03. A brief history of Western art by the Chinese

You Westerners are full of yourselves.

Conservative Contraception

My questions:

1. Wasn't the G8 made somewhat irrelevant last autumn when it was decided that the G20 would be more important?

2. Why is this our business? Like, a bunch of women in poor countries are going to care about what Canadians say and do. WTF. You'd think paternalistic programs would be something Conservatives avoid. And perhaps this is where they are coming from? I don't know. I do know that getting all upset at their ideology is a predictable distraction to the fact that this story has no substance. Why can't we have a discussion around the thesis: "Poor women are capable of taking care of themselves". If that statement is false, why? And why is it our (G8) problem as opposed to the governments of the countries where these people live? 

Birth control won't be in G8 plan to protect mothers, Tories say

Posted via email from Timothy’s posterous

Joe Stack

A guy named Joe
Blew his stack
He was an engineer
He had a thing against the IRS
He thought the tax rates were too dear
So he flew his plane into a building
After leaving a note online
Not on Facebook though,
He was not that much of his time

The internetz wrote about it
And called him a right-wing loon
Some said had he been Muslim
There’d be another war soon
But all in all it’s just a tale
Of an engineer, a website and a plane
A fool who set his house on fire
Before he flew off never to be seen again

Posted via email from Timothy’s posterous

Google Docs

I just did this diagram in less than 5 minutes using Google Docs. Had I done it in Illustrator, it would have taken a half-hour. I don’t know if that means I suck at Illustrator, or if Google Docs is AMAZING.

Pressed to answer, I would say, no, I’m not that bad at Illustrator; it’s only that I would have had to create every element in the diagram from scratch.

Google Docs provides a library of readymade elements, which cuts down on the time. The library also suggests they anticipated the app being used to create flow charts such as this one, meaning everything I needed to make such an image was there, increasing efficiency. Props to Google for so thoroughly anticipating user needs.

Secondly, Google’s experience with user interfaces (as with SketchUp) shows in intuitive behaviours: I could put shapes together and move them around without having to explicitly group them, which was a bonus.

So yes, Google Docs is AMAZING, but it helps if one knows what one’s doing to begin with. 

(BTW, this diagram illustrates the syncing relationship between Google’s cloud services and an iPhone and MacBook). 

 

Posted via email from Timothy’s posterous

Limbaugh’s NY Condo

Slide show on Business Insider

Also on Gawker (the headline this morning read 'tasteless' rather than 'gaudy'):

Rush Limbaugh's Gaudy Fifth Avenue Penthouse Is Now For Sale

It looks like Rush Limbaugh is moving ahead with his threat to leave New York City. He's (finally!) put his tacky Fifth Avenue apartment up for sale. The cost of ridding NYC of Rush once and for all? $13.95 million.

Limbaugh promised that he'd sell his Manhattan apartment last March after the Paterson administration proposed raising taxes on New York residents who make more than $500,000 a year. (That wasn't the first time he'd made the threat. On the eve of the 2008 presidential election, Limbaugh said he was "seriously considering selling it," since "it may now become stupid to own any property there.")

Limbaugh was lying at the time, unfortunately. Months after making the "threat," he'd yet to actually put the Fifth Avenue apartment on the market.

But now he has! The 20th-floor penthouse at 1049 Fifth Avenue, which Limbaugh purchased in 1994 under the name RH Trust (Rush’s middle name is Hudson), was officially listed two weeks ago for $13.95 million. And although he’s described the place on his radio show as “fashionable,” it’s doubtful that will be the word that comes to mind when you look at the photos below, which show off moldings of “hand painted gold leaf” and his “hand painted ceilings and walls” by “renowned artist” Richard Smith.

Posted via email from Timothy’s posterous

It’s buried under bullshit

"You walk towards your fear, you embrace your fear, you don't try to hedge it. That a part of real living as human being, as a spiritual being is to embrace and encompass your fear, your love and not run away from anything because that's the life experience. And it's in that richness that we find the most beautiful art, the most beautiful music, we find the richness of what the human soul can offer and I see all that richness buried under such bullshit." – Michael C. Ruppert in Collapse (2009)

Posted via email from Timothy’s posterous