Assistant Public Defender at Office of the Ohio Public Defender
Law Practice | Columbus, Ohio Area, US
My current passion is criminal defense law. But I've written and researched on a variety of topics, including Internet law, modern copyright law, science, media, science fiction, journalism, and new media.
My goal is to make a difference in how people understand and think about the law and how it impacts expression and our values as a society. I have engaged in litigation support and policy research in these areas. But I also love to write for a general audience about topics related to criminal law, developing technology, internet policy, and culture.
Specialties: Knowledge of and experience with criminal law, copyrights, cyberlaw, new media, blogging, and legal writing and research. Extensive experience in legal writing, research, and appellate procedure and litigation. Additional experience with Microsoft PowerPoint, Excel, Access, and Office; Adobe Audition and Photoshop; HTML; and various blogging platforms. Some experience with CSS and XML. Working knowledge of Spanish and Hebrew. Always willing to learn.
2011 - Present
Assistant Public Defender / Office of the Ohio Public Defender
Law Clerk / Office of the Ohio Public Defender
Lawyer / Independent practice
io9 Contributor / Gawker Media
Senior Intern / The Chicagoland & Suburban Law Firm, P.C.
It’s time for episode two of Hold On To Your Butts. This time, we chat about the Cleveland Kidnapping hero, Charles Ramsey. We also explore celebrity twitter breakdowns. Finally, we discuss some of your feedback from last episode. Thanks for listening! And, as always, please email us with feedback! (CAUTION: there are some swears in this.)
Subscribe to our podcast using THIS FEED or SUBSCRIBE THROUGH ITUNES!
Relevant links (some are NSFW):
Charles Ramsey Auto-tuned
The Gregory Brothers
Kai the Hitchhiker
Amanda Bynes on Twitter
State of Decay
Our Theme Music: "The Adventure Lights," by Skip Cloud
Also featuring "Friend of Mine" by the French Girls and "Did I Miss Something Right Now?" by Roads.
Well, here it is! It’s the first episode of the Enchantment Under the Sea podcast, “Hold On To Your Butts.” We’re trying this whole thing out, and this was actually only a test episode, so things will probably change dramatically between this episode and our next one. Stay tuned! And, feel free to email us with any feedback. (CAUTION: there are some swears in this.)
(NOTE: You can subscribe to our podcast using this feed, or SUBSCRIBE THROUGH ITUNES!)
Relevant links (most of which are sweary and, as a result, NSFW):
Newscaster swears on air
Reddit hunts for Boston Bomber
"I would like to extend you a counter-offer..."
Tilda Swinton / Ebert Dance Party
Our Theme Music: "The Adventure Lights," by Skip Cloud
By way of setting the scene, I should probably mention the turkey burgers. I was out to dinner at a local place called Barleys with my friends Al and Colleen. When we ordered dinner, by some miracle of taste convergence (or maybe just social psychology), we each independently decided to order the turkey burger. The three of us did share a lot of common tastes, so, as often happens when we get together, each of us was talking about the music we’d been listening to recently. I predictably started blathering about the new Superchunk record,1 which I had become obsessed with in the preceding weeks. But during that conversation, Al said something that turned out to be as important as it was surprising. He told me he’d been listening to the new Kesha album.
Now, as a lot of people I half-drunkenly ranted at in college would be able to tell you, I spent many years being a pretty textbook indie music snob. At the time, I thought this snobbery was the result of painstakingly cultivating objectively good taste in authentic indie music. What that boils down to is that I was a dick to people about their fondness for Britney Spears and as a matter of course lambasted people’s “guilty pleasures.”2
Eventually I started thinking about why I’d developed this snobbery, and, incrementally, I realized that my snobbery was less about objectively good music and more about performance and social positioning. I wanted to be a guy who knew good music, so I acted like the things that my friends and I liked were good music, period, and if you disagreed, you weren’t the kind of person I wanted to be around. When I realized this about my snobbish posturing, it made it easier to talk about taste and culture without being an asshole.
Easier, but certainly not impossible. There was still a Kesha-shaped alcove in my cultural appreciation apparatus. As Kesha rose to popularity over the years, despite all of my hard thinking about taste and authenticity in music, I’d developed a miniature knee-jerk hatred of her.
So. Back to Barleys, where Al had just invoked Kesha’s name in a pretty surprising context: sandwiched between the Dirty Projectors and the Talking Heads. The conversation had taken a turn. I put down my turkey burger.
What my friends had to say at first about their recent foray into pop music actually fit pretty neatly into my existing schema for cultural criticism: they first extolled the kinds of technical innovation and formal experiments happening in pop music production, an easy avenue into appreciation for someone like me. Another such avenue: one prominent pop producer used to make weird lo-fi electronica under a different name, another used to be part of a 90s alternative duo, etc.3 This way of thinking was not shocking. This kind of six-degrees-of-indie-credibility game is endemic to the way I usually think about authenticity in music. Art I might otherwise snobbishly consider unworthy basically borrows authenticity from the cultural experiences I’m already behind, the ones I have deemed authentic and worthy.
But. There was something else going on. The whole conversation also had hints of something that I couldn’t quantify until later, something more rooted in the experience of listening to a song and less in the thinking about it. That other unquantifiable thing is what I’m here to write about.
Here’s the thing about the Demi Lovato song above: it’s fun as hell to listen to. I tried writing this post once without the song playing, and it was not easy. Something is happening inside of the song that is really hard to recapture when it’s not playing. It’s happening in the intersection between my head and the song, in the music-hearing part of my brain where the song lands.
So here’s a mostly unnecessary breakdown of the sounds and bits that make up this song, produced by the SUSPEX.4 It’s got a lot of those clean synths you hear everywhere in modern pop music that are blended with, and sometimes indistinguishable from, strings (see also Rihanna’s “Diamonds,” Bieber’s “As Long As You Love Me,” and, utterly mercilessly, throughout David Guetta’s recent hits, to name only three of the hundreds). And the bass is almost exclusively slightly distorted synths. But what’s so cool about the way the song uses these electropop elements is that they’re all pretty distinct from each other and spread out across the stereo channels. I’ve talked before about creating space in music, and there’s a lot of space in the big, big chorus. And the bridge is just as big and brash.
My favorite aspect of this song, though, is the bright acoustic guitar that’s right on top of those staccato bass synths in the pre-chorus hook. This kind of acoustic guitar sound is part of the mix for a lot of pop music (for some artists more than others). But it’s really bright and high in the mix here and prettily double-tracked on both stereo sides. The synthesized bass, the pounding kick, and the acoustic guitar all add up to this marching thing that makes you surge every time it punches in (Coldplay also makes frequent use of this trick). When the last repetition of the pre-chorus hook rolls around, it’s bare-bones acoustic guitar at first, and it grows, but it’s got this little drop right before the final chorus. In a perfect pop moment, the chorus explodes in, and Lovato skips the arpeggio of the chorus hook and jumps right to the soaring top note. The result is a total gut-punch.
The star of the song is obviously Lovato’s voice and that killer melody that she totally owns. It arches up and down the way a great vocal hook should. And it’s so damn catchy. The song’s also very lyrically sincere, not hiding behind anything explicitly brainy or critical. Lovato is simply saying that a real love, an honest, heartfelt love, could just be too much and very well might kill her. That already sounds like exactly what I want from a pop song.
The resulting piece of music is just a blast of pop songcraft that triggers all of the right places in my brain. It’s simple and catchy, and it’s also a little weird and a little surprising. In short, it’s a cause for celebration for poptimism.
Pauline Kael seemed to come the closest to actually infusing her criticism with the bodily, emotional experience that art can be. She once said that the experience of seeing a good film “has some connection with the way we reacted to movies in childhood: how we came to love them and feel they were ours — not an art that we learned over the years to appreciate but simply and immediately ours.” And Nitsuh Abebe has obviously written very clearly and convincingly about this kind of experience with art.5 There’s no need to venerate this feeling, no need to exotify the “teen girl” pop art experience: we all have it, yet cultural criticism at large seems to be missing the tools to talk about it.
And that’s what’s so important about this particular pop experience. People who think of themselves as critical cultural consumers will, without even meaning to, develop criteria for determining if a thing is good or bad. I myself have done this. These criteria tend to be brain-based, textual, cultural, historical. But even those criteria are just a way of trying to describe post-hoc the pleasure that a piece of art invokes. Sometimes it’s the challenge, the grappling with something, that enhances its pleasure. Sometimes the pleasure comes solely from the emotional triggers of a redemptive story or well-timed musical crescendo. The point is, it’s all the same thing. A positive experience from a piece of art is simply a pleasure, no matter how detached and brainy the description of it might be.
The danger, at least for me, is that there’s a temptation to think of the brainier and more detached and more analytical pleasures as being more valuable pleasures. I, for some reason, have always had an inherent distrust of those pleasures that I feel without the cooperation of my analytical self, those pleasures that I can’t explain using my traditional cultural criticism tools (it is obviously very likely that this is encoded culturally in gender, but I’ll leave that to the people I know who are better at that kind of thinking, like, for instance, Nitsuh Abebe, and all of the people he mentions in that article). In the past, I’ve gone as far as thinking these pleasures were not actual pleasures, were mere tricks played on me by music producers.
Obviously these kinds of pleasures can still be talked about critically, but the language for doing so is drastically different from the language that I would use to describe something like experimental jazz or math rock. The (again possibly gendered) mainstream critical kit bag doesn’t come equipped with the tools to talk about this. But it should! How easily something enters my brain and sticks there is just as much of a thing to praise as any musical technical achievement, maybe more so! We don’t think about how hard a pop song has to work to still feel easy, all the while hiding all of that work. It’s at least as hard as making a good turkey burger.
That leaves me with two ways of getting at the pleasure a piece of pop art has to offer: one is analytical, and one is bodily and automatic. If a pop song manages to go straight to the pop-appreciating automatic part of my brain while easily sidestepping my hypercritical distrust of simple pleasures, it’s an unqualified success.
That’s this song. This crazy good song.6
1 Keen-eyed observers might realize that, in fact, the new Superchunk record is basically a pop record, and my blathering about it already contained the seeds of my undoing re: Kesha.
2 I love this phrase. It’s just a way for people experiencing the kind of pleasure in pop music I’m trying to describe here to minimize it and still maintain their “critical” position. Now, when people use this term, I try to remind them that no one should feel guilty for deriving pleasure from art of any kind. Instead, they should think about why they enjoy the thing and also why they feel guilty enjoying it.
3 These examples are half-truths that are half-remembered from that conversation. But there are many examples of this kind of thing in modern pop music, as you’ll see when we talk about this track’s producers. Another of my favorite examples: the guy responsible for Geggy Tah’s “Whoever You Are” also co-wrote and produced P!nk’s “Blow Me One Last Kiss.”
4 Like I said, there are weird musical connections all over the current pop production world: half of this production duo is Mitch Allan, the man behind early-00s pop punk band SR-71.
5 Seriously, Abebe’s piece is so important and says a lot of what I’m trying to say here, only more clearly. AND it tackles some aspects of this whole thing that I can’t even begin to get at. Go read it. It’s one of my favorite things I have EVER read about criticism.
6 This article would previously have been published under the banner of the ill-conceived “Unexpected Pleasures” series. I still believe pretty seriously that pleasures in culture can be found anywhere, but the “unexpected” part of the name, in retrospect, seems to be implying that the piece of art I’m talking about should have been bad, or that the art is good despite something about itself. That’s not what I’m trying to convey, so I’m not including that tagline in these posts anymore.
One answer might be that it’s supposed to just be a fun action story. Another might be that it’s necessary set-up for the bigger events unfolding in later films. Either way, the film doesn’t quite get there, instead being too big to be pure fun, and too pointless to be epic.
An Unexpected Journey is, I guess, the first in the trilogy that will make up the story of The Hobbit. I read the book a very long time ago, and I don’t remember enough of it (or care enough about it) to be a purist. But I do remember thinking it was really exciting and really fun. I can’t say the same for this film.
An Unexpected Journey doesn’t really work on its own. It could have. There are things in this movie that would have made for a pretty fantastic, pretty fleet, and actually surprisingly affecting adventure story. Instead it’s flabby, self-important, and intermittently soporific.
Here’s an example: early-ish in the movie, we’re introduced to a brown wizard. He’s skittish and goofy, his voice comically lilting as he throws things around his ramshackle yet whimsical cabin. He seems more like a squirrel than a noble guardian of Middle Earth.
This stuff with the brown wizard is intended to set in motion some important plot machine that eventually (one must assume) leads to the rise of Sauron and the events of the Lord of the Rings movies. But the brown wizard is also there, I presume, for a little comic relief.
I don’t think this element succeeds on either front. Tonally, his involvement is mostly goofy, but it also has the burden of having to be important, because the brown wizard is the only one that’s actually seen the coming dark and terrible threat. That mismatch, swinging wildly from goofus to gallant, is hard to stomach.
Note that I called the guy the brown wizard; I can’t even remember his name. His role is limited to babbling about nature, turning some gears in the rise-of-Sauron story, and having bird shit on his face (not a joke; this is an important part of his character, apparently). Each time he showed up, I checked my watch. You’re not supposed to do that in an epic, swashbuckling adventure story.
I checked my watch a lot, in fact. There were some great action shots and sweeping vistas and moderately interesting expository flashbacks. But there were also repetitive and confusing action shots, samey vistas, and uninteresting expository conversations. (My roommate Justin said he had many thoughts about this film, but “the short version is that there should have been a short version. Zing!”)
On balance, about one third of this movie is great. That, unfortunately, leads me to believe that about one third of each movie in the trilogy will be great, and that this whole ordeal would have actually made one great movie.
The great one third of this movie happens mostly to be the last third. Near the end of the film, things start happening. Bilbo has a realization that he loves his home in the Shire, and he’ll gladly help the dwarves fight for theirs, a moment of real emotion in an otherwise emotionally calm sea. Bilbo also has a really great encounter with Gollum, who is the most interesting thing in this movie. But it’s all blended in with kinetic and visually interesting but meaningless chases through what look like either the ruins in The Elder Scrolls games or the caves in the Fallout games.
The whole thing, in fact, kind of looks like a video game. I don’t know that it’s fair to call this a live action film. Pretty much all of the fighting the band of journeymen (and no journeywomen whatsoever) does is against CGI enemies. And there are too many of these swarming goons for us to really even care when the party defeats one (or a dozen). There’s an attempt at a climactic battle against a head orc of some kind, and that could feel important, but instead, it’s messy and fiery and ultimately has no resolution.
There’s a chance that most of the film’s weaknesses come from it being the first film in a trilogy. But remember: it was the film’s choice to be this way. This isn’t Star Wars, where there were three distinct phases in the Hero’s Journey and battle against the Empire. Hell, this isn’t even Lord of the Rings, which had important events in each film to make each one feel like it had stakes. This is just a small adventure story bloated to epic proportions. Maybe we shouldn’t be surprised that it waddles more than it flies.
I’ve always been a fan of traditional Indian music. I used to search LimeWire for long live recordings of ragas and tabla solos. The best recordings combined severe technical proficiency with a fun, improvisational feeling. The rhythms and melodies were complex, but there was something very satisfying and whole about the compositions, no matter how fractured they could appear if you thought too much about what time signature they were using.
Recently, that interest led me to a new style of music. I’ve been exploring the golden age of Bollywood cinema, the music produced for Indian popular cinema in the 50s and 60s.
For example: the song “Ganga Aaye Kahan Se” (embedded above) is a folk ballad about the sacred Ganges river, sung by Hemant Kumar in the 1961 film Kabuliwala. The film is about a trader from Afghanistan (Kabuliwala meaning essentially vendor from Kabul). From what I can tell, the story is complex, involving murder, shady business dealings, and melodrama. But the song itself, as the lyrics in the embedded video above illustrate, is an ode to the majestic Ganges river, a river central to Indian culture and religion.
The song sounds, at first blush, like the kind of churning, driving folk song that might have been sung by workers on the Ganges itself. In fact, despite how little information I can find about this song on the internet, this video seems to indicate that “Ganga Aaye Kahan Se” is possibly based on an earlier Bengali boatman folk song.
Despite its origin as a churning work song, and despite the very traditional instrumentation, the rhythm section of the song is actually mixed very low. The real star of the song is obviously Kumar’s voice. All of the best Bollywood playback singers had a knack for the hovering and swooping vocal parts that characterized the genre (of course derived from similarly swooping and careening vocal improvisations in classical Indian music). The beauty of Kumar’s performance is that he’s not pushing his vocal lines up by force, he’s gently pulling them up. What I mean is, there is a sheer power behind a singer like Nusrat Fateh Ali Khan (though he’s from Pakistan, not India). And that sheer power is pretty impressive. But Kumar’s delivery is gentle by contrast. Where his singing could compress, it instead breathes.
The result is something that flies less like a rocket and more like a glider. It’s gentle and reverent, which makes sense considering the subject matter. The whole thing is otherworldly, mesmerizing. It’s probably the closest someone like me can get to understanding the mystery and importance of the Ganges river on which the song draws.
Another song I’ve come to love recently is “Sawan Ka Mahina,” a duet performed by Mukesh and Lata Mangeshkar. In the film from which the song comes, Milan, the song serves a very modern purpose: as a framing device for a montage demonstrating the unrequited love of a man for a woman. You can see the sequence above (again featuring the river pretty prominently).
The song starts with the man teaching the woman how to sing the song itself. Her confidence grows through the lesson, and we see them spending time together throughout the middle section of the song. It ends with the woman performing the song flawlessly, and to great applause, as a result of the man’s lessons. It’s almost beat-for-beat the exact same kind of scene you’d see in a modern romantic comedy. And the song itself is simple and beautiful.
I know so little about this whole genre and its cultural implications, so if anyone reading this has some direction for me or any recommendations, please let me know.
Welcome back to the not-entirely-irregular feature that I’m calling “Unexpected Pleasures.” It’s about trying things that I might otherwise dismiss to discover the joys hidden inside. Send suggestions to email@example.com.
The Footloose remake that just came out last week opens in the only way it ever could: a pretty slavish, but clean and stylish, recreation of the titles of the original, which was a series of close-ups of dancing feet. This opening certainly prepares us for what is to come, as it’s not the only slavish recreation we’re going to see (those of you that were into the original will see familiar angry-warehouse-dancing, confetti-storms, and even a rusty yellow VW Bug).
But then, maybe if this remake were a little MORE slavishly dedicated to shot-for-shot re-creation, its pleasures might be easier to discover.
I think the largest flaw obscuring the pleasures of this remake is that it’s trying too hard to please basically everyone. For example, there’s a weirdly action-packed bus race in the middle of the film, complete with multiple crashes and fires. And the dancing is more inspired by Step Up than the Brat Pack, presumably because the nod to a more recent dance movie will draw a bigger crowd of young people. There’s also a lot of country music and whole lines of dialogue that seem designed only to pay respect to middle-America or “Christian values.”
But then there are also those scenes that are literally line-for-line cribbed from the original. The music is mostly exactly the same. The iconic confetti-dusted “let’s dance!” is virtually unchanged, and every element during the warehouse dance scene, down to the slats in the walls, looks identical to the original film. This remake is desperately trying to bring in the Step Up-loving children of the 00s right alongside the Kevin Bacon-loving children of the 80s. The result is kind of a mess that’s hard to love from either of those angles.
Aside from the bigger tone and construction problems, the movie also has a host of other more basic problems. For instance, it’s kind of embarrassingly obsessed with the body of its female lead (Ariel, played by Julianne Hough). The camera is drawn inexorably to her rear end or her exposed bellybutton in every shot in which she appears. That means that a scene that is otherwise about a bus chase ends up with her taking her shirt off, or a scene that’s about the joys of country line dancing includes the male lead (Kenny Wormald as Ren) essentially licking her stomach and face. (And don’t get me started on the incompatibility of the innocent, virginal, “maybe I’ll kiss you someday” church-love of some scenes and the near-obscene body-licking of the others… the two leads have been grinding against each other for about a half hour before their “first kiss.”)
Another pretty major flaw here is that, despite the film’s message that dancing is all about celebrating fun and youth, the dance sequences are almost never actually fun. They’re edited like bad action movies, with not much clue where to focus or what’s worth looking at. Wormald seems to have some serious dancing training, but his training means he’s dancing with a style that is more about intensity than exuberance. The result is that, while other actors (most notably Miles Teller as Willard) are nailing the little skips and arm swings that make every member of the audience want to dance, Ren and Ariel are more concerned with crumpled-up stomping and arm-waving and booty-shaking. The dancing mostly just doesn’t look any good, or any fun.
And it’s a damn shame that the fun dance sequences are dangerously outnumbered by the either unnecessarily sexy or jumbled and cluttered ones, because the fun ones are a LOT of fun. The big finish, where Willard has learned to dance and he and Ren get on the prom floor together, is the most fun dance sequence in the film (it’s actually even MORE fun than their duet in the original). It’s still a little poorly edited and the camera angles are still not doing anyone any favors, but it’s just so fun to see the two of them next to each other and totally in sync (something we haven’t yet seen in the film, since when Ren and Ariel are in sync, they’re basically dry-humping).
And I have to also mention that Teller, as Willard, is fantastic, and the update on his character is mostly perfect. I didn’t really catch until the middle of the movie that his defining characteristic is that he always gets in fights. It’s almost as if the movie suspected that there was a little too much nuance to the character, so they have his girlfriend say to him “now don’t go getting in a fight like you always do” to remind us that he’s meant to be a one-note character. He’s not, and despite this weird undercurrent, Willard is the most interesting and most fun-to-watch character (which in turn makes him the most realistically human person in the film).
That also means that the montage of him learning to dance is by far the best section of the film (another instance where this film actually improves on the original). Willard increasingly competently dances all over town and alongside excited and adorable children. He moves fluidly, not jerkily and showily. And he smiles. Not mischievously, not devilishly or suggestively. Just genuinely. The dialogue he has to say is, just like everyone else’s, often pretty dumb, but his performance alone is still almost worth the price of admission. If there’s a real, unadulterated pleasure in this film, it’s watching Teller act and dance.
So there are pleasures here: the exuberant joy of dancing shines through here and there, and there’s a standout performance woven in here. Maybe the fun of this movie is just how unexpected these pleasures are, buried as they are under layers of unnecessariness.
Rise of the Planet of the Apes is the tale of an ape, given the gift of hyper-intelligence, at the tipping point between evolving and maintaining his animal nature, caught between something bold and new and something simple. It’s oddly apt that the film itself also teeters between bold and simple. It’s got the simple appeal of a nostalgia-fueled action film, but it’s also reaching for something more complex and lasting. Let’s see where it comes out…
Despite its pedigree, the film faces challenges from the get-go; for starters, you come in worrying that you’ll find it difficult to take James Franco seriously enough to not only envision him as a scientist, but also to care about his science (a new drug that genetically manipulates Alzheimer’s disease patients to cure them – I see no downsides looming!). Luckily, it turns out that this movie, like us, doesn’t really care about James Franco, or really even the science; it’s got a better man in its sights.
Or I guess better ape. The real protagonist of this film is actually Caesar, the genetically-modified intelligent ape that Franco has created with his Alzheimer’s disease cure. The film follows Caesar from birth to rapid development, on through becoming a part of Franco’s family, and ending up, via a tragic event, thrust into the company of a number of apes that are much less evolved than he. Caesar’s journey is the heart of this film, and it’s certainly a vital one.
That journey is portrayed really artfully. Thanks to some adept and stylish directing by Rupert Wyatt, Caesar’s early life swinging wildly and fluidly through his makeshift bedroom segues easily into the intense territorial struggles between the apes, which eventually becomes a seriously smart and entertaining set-piece final battle between ape and man. Caesar’s story is so well-told (in no small part due to evocative motion capture acting by Andy Serkis and the large visual effects team behind him), and we feel right along with him throughout his emotional journey, experiencing these emotions for the very first time through his eyes.
Though maybe, in looking back on the film, I’m seeing something different than was presented to me. The truth is, while Caesar is the center, and his bits are really great, the film spends a lot of its time on the pseudoscience of the drug and the politics and ethics of its testing. It also wastes precious seconds on Franco’s “love interest” (that honestly couldn’t be called anything more than merely his “interest”; the romance in this story is imperceptible and unexplored).
No, the things I remember about the film are the slow building tribe of rebellious apes, the swelling humanity of our hero Caesar, the flawless character arc that drives this leader to his inevitable coup, and the breathless action sequences. Those things were all really interesting and really well-done. And by themselves, they represent some of the best film-making this year.
That might be why I have some lingering doubts about praising this film unequivocally: I can’t help but wonder why exactly I’ve forgotten so much of the film, why only the things I liked are leaping to my mind.
Don’t mistake me: these somewhat forgettable sections are not really bad. They’re just conventional and inconsequential. The real trouble is that the rest is brilliant; it’s the most elegant and moving portrayal of the humanization of a non-human I’ve seen in a while. That disproportionate quality problem is what really irks me here.
So where does that leave us? Maybe this movie rests in that weird valley where Caesar himself lingers for most of this movie: he’s so human in so many ways, but still so wild, so animal in others. This film is equally stuck, but between brilliance and convention. And the film, unfortunately, will never get the chance to evolve.
Bitches, boasting, Benzes, bullets: one of the biggest quandaries facing white, hipster hip-hop fans is rappers’ propensity to talk about themselves, their guns, their money, and their cars, all whilst talking shit about other rappers, talking shit about women, and just plain shit talking. It can be tiring for this humble, white listener, who considers himself something of a feminist. But I think I, and, by extension, my white hipster brethren, give Jay-Z a pass because of the “authenticity” thing.
The major challenge facing the collaboration that makes up Watch the Throne is that I extend no such courtesy to Kanye West.
Here’s where Jay-Z gets his cred: Hova comes from the streets, and has lifted himself up. It’s Horatio Alger from the corner, and he can’t forget his past. So even though he is married, in his 40s, and likely spends most of his time pulling business deals, we can overlook the fact that you’d never know it listening to his lyrics these days because he’s from the hood.
Then there’s Kanye. His dad was a photojournalist. His mom was a professor. He got A’s and B’s in high school in a middle class neighborhood in Chicago. And he almost certainly lacks any semblance of self-awareness. So besides sounding like a spoiled frat boy when he raps about drugs and cars, he also happens to cast himself as a pompous douche.
That being said, I kind of consider him to be the Quentin Tarantino of the hip-hop world — undoubtedly talented, undoubtedly arrogant, but a little stiff. Both Kanye and QT are obviously obsessed with their craft, taking in copious volumes of information and spitting it back out. But along those lines, it frequently feels like Kanye the rapper (like Tarantino the screenwriter) just follows the hip-hop playbook, reaching out into the ether and pasting together various affectations and tropes (casual misogyny, self-aggrandizement, religious imagery, shout-outs to his momma).
But unlike Tarantino, Kanye works in a medium that largely redeems itself from a content standpoint only because it claims to portray “real life.”
And that brings me to my nut graf. I approached Watch the Throne with some trepidation. I am a late arrival to the Kanye West party; while I’ve always appreciated his extreme talent as a producer (he lifted up The Blueprint, no doubt), his albums are unfortunately full of him rapping. Kanye’s mic skills are just weak. Juxtaposing him next to the greatest living rapper, I thought, would expose him for the mediocre MC that he is.
I was kind of right, and kind of wrong, actually. Kanye surprisingly holds his own as a rapper, sort of, on this album. On “Ni**** in Paris,” one of West’s standout takes on a reasonably conventional mid-tempo hip-hop track, his nasal, singsong voice actually fits into the flow of the song after Jay-Z tags him in, and he actually mixes up his rhyming pattern once in a while.
There’s a lot for a fan of either artist to enjoy on this album, and the first few tracks are strong (“No Church in the Wild” is a killer opener). Thanks to Kanye’s trademark eclecticism and a bevy of guest producers, Watch the Throne dabbles in West Coast hip-hop, rock, soul, classical and club music, and dubstep. But it lacks the cohesion of Kanye’s albums, production-wise, for some reason. This album is uneven.
From a delivery standpoint, Jay-Z and Kanye aren’t the next rap supergroup by any means. But maybe they’ve rubbed off on each other a little. In contrast to The Blueprint, on which a young Kanye submitted to Jay-Z’s overall vision, Jay-Z on Watch the Throne lyrically adapts some of Kanye’s flavor: obsession with emulating a rockstar, rapping about his elite social status, cocaine, and European stuff. (“The Black Axl Rose,” Jay-Z calls himself at one point.)
But just because he exceeded my low expectations as a rapper, that doesn’t mean Kanye avoided the things that make a large part of me lukewarm to Kanye West’s music (just for the record, another part loves it).
On “Gotta Have It” (co-produced by The Neptunes, by the way), Kanye and Jay-Z trade back and forth about how they will raise their as-of-now non-existent sons. And actually now that I think about it, that’s one of the few similarities between Jay-Z and Kanye as MCs – both will occasionally slip an introspective, self-critical verse or track into the typical hip-hop braggadocio (Jay-Z’s are occasionally incisive and thought-provoking, while Kanye generally seems like a robot executing some kind of self-reflection program). Still, it seems like a reasonably appropriate subject with some promise. Kanye leads off:
And I’ll never let my son have an ego / he’ll be nice to everyone wherever we go / I mean, I might even make ‘em be Republican / So everybody know he love white people.
Ok, so far so good, and a little funny in that Kanye West, cultural/political non-sequitur kind of way. But wait, was that an oblique reference to Kanye’s big dumb Katrina fail? Well. It’s kind of vague, I guess. But hold the phone. About a half-couplet later:
And get caught up with the groupies in the whirlwind / And I’ll never let ‘em ever hit the telethon / I mean even if people dyin’ and the world ends / See, I just want him to have an easy life.
Oh no he didn’t.
That was Kanye West reflecting on Hurricane Katrina. The same Kanye West, in a later track, “Murder to Excellence,” laments black-on-black crime:
In the past if you picture events like a black tie / What’s the last thing you expect to see: a black guy / What’s the life expectancy for black guys? / The system’s working effectively, that’s why.
And so it rings hollow when Kanye tries to follow Jay-Z’s lead in ruminating on the downtrodden state of African-Americans. While Hova has personal experiences to reflect upon, Kanye raps from the perspective of a middle-class egomaniac. Nothing illustrates this better than the fact that, in Kanye’s mind, apparently, the biggest victim of Hurricane Katrina (which killed or displaced literally thousands of black people) is himself, because he’s just so damn misunderstood.
That’s the problem with Kanye West, and with Watch the Throne. And it’s a big, almost fatal problem.
Buried deep in the back of the opening track of Bon Iver’s recent self-titled album is the click of drumsticks. In front is a guitar line that, by itself, is haunting and beautiful enough. But buried deep behind the beautiful things on the surface are the things like those clicking drumsticks, the things that creep up slowly, the things that fill in the space around the more obvious (more easy) beauty.
Bon Iver’s For Emma, Forever Ago was a careful study in intimacy and the smallness of the sonic space there, but that kind of intimacy is an easy sell. This record is all about what happens to that intimacy when the walls are pushed back to let in… well, everything.
The walls getting pushed back might be more than just a stylistic choice; Bon Iver has developed a serious amount of cachet in the world of indie singer-songwriters. And the transition into more success and into bigger studios often causes these confessional singer-songwriters to step back from the lo-fi microphones. But while some musicians let their sound bloat to fill that newly created space, Bon Iver has maintained a tight rein on his sound to instead fill the space surrounding it with layers, revealing a more subtle beauty.
This new space is the reason why a lot of lo-fi, acoustic artists have sort of faltered on the leap from home-recorded tracks of just their voice and a guitar to the more ornamented sound that often comes with more money. One major example of this is obviously Iron and Wine, a band that never seemed to recapture the understated beauty of the early lo-fi recordings after adding a band and orchestra. Iron and Wine’s songs remained lyrically adept and still evoked some real pathos, but they were presented in a way that, for all of their prettiness, still felt too big for their own good.
Bon Iver, on the other hand, has thrived after his sonic expansion. He’s folded in a few new styles (there’s more than a little R&B and 80s pop here), and he’s taken a few lessons from the masters of big spaces (the sound of this album evokes Sigur Ros and Sufjan Stevens). The album comes complete with the Bon Iver staples: barely-sensible lyrics (more constructed for their aesthetic worth than written for their meaning) and lilting falsetto. It’s all not just bigger, but also more full (note the pleasant surprise of singer and mastermind Justin Vernon’s full voice rumbling through periodically).
But that fullness is only half of the picture of what makes the record so great. For all of its depth, this album is also still very interested in the warmth of intimacy. And it’s a paradoxical intimacy here, more akin to the intimacy of a great orator in front of a rumbling crowd than the intimacy of a confessor in a lonely room.
While the intimacy of someone confessing their deep feelings over light acoustic guitar in a stark barn can be obvious and palpable and affecting, it’s also cheap. Bon Iver instead opts to put a lot of space around himself, letting the sounds fill the space, creating a more expansive beauty. It’s a hard trick to pull off, the intimacy of a crowded room. But when it’s pulled off so well, it’s pretty remarkable.
Remember that classic scene in E.T. where the government agents violently interrogate and then kill Elliot’s school teacher? Or remember that scene in The Goonies where Chunk’s dad and Mikey’s dad expose their history of mutual hatred? How about that scene in Close Encounters where the aliens eat human flesh?
Yeah. Neither do I. But apparently J.J. Abrams does. Those three things all happen in Super 8, Abrams’s presumptive homage to the fun sci-fi features of his childhood, like those mentioned above. And I only bring them up this way because the difference in kind that makes those scenes sound stupid in those respective movies is what makes Super 8 kind of a problem.
I’ll start by saying that Super 8 is best described as a mostly classically Spielbergian film, contrasting a rollicking, wondrous adventure with the personal journeys of its characters. In that tradition, it’s got Joel Courtney as Joe, the bright-eyed, slightly shy protagonist pre-teen, and Elle Fanning as the equally pre-teen, equally bright-eyed light-touch love interest. There’s an arc about Joe’s grief over his recently-deceased mother and about his relationship with his dad (played by Kyle Chandler, who is quite boring in this role). It’s also got banter between children, the aspect that most securely anchors this film in reality.
But it’s also got, as alluded to above, some seriously action-influenced elements that would have stuck out like a sore thumb in a Spielberg film. In essence, as many people have said before about this film, it’s basically two different movies, and only one of those qualifies as a throwback vintage summer romp. The other is kind of a mess.
Before I get started in earnest, I want to hedge against the obvious problem with thinking of Super 8 as chiefly a Steven Spielberg homage. Abrams is certainly allowed to make his own movie in any way he wants, and to pigeonhole this film into being only the light-hearted Spielberg-y romp that it seems to (at least partially) want to be would obviously be unfair.
But I have some related concerns, which I think are completely fair. Whether your film is a direct homage or not, it’s still squarely a bad idea to make some parts of your film feel like The Goonies and make other parts feel like Transformers. Because these two worlds don’t collide as neatly as Abrams might think. These two types of storytelling are not only stylistically incompatible, but also philosophically opposed.
One explores the emotional territory of the response to the unknown, but the other relies on the outsized menace of a slavering beast. One is about growing up and learning about the world and yourself, and the other is mostly just busy action. One is philosophical and wondrous, and the other is pulpy and escapist. One is Roddenberry and the connection with new consciousness, and the other is Lovecraft and the horror of the unknowable.
In short, Steven Spielberg’s style was never really suited for the kind of specified fear that monster movies play with. His style has mostly been about the fear of growing up, and how that fear starts to fade when we connect with these monsters and realize that they aren’t at all the monsters we assumed they were. The monster here remains almost entirely a monster until the bitter end.
You can sort of tell that Abrams knows he’s getting this wrong, too. After an extended running-and-dodging sequence in the alien monster’s hidey hole, the whole production slows down for what is supposed to be a heart-felt moment of shared experience between Joe and the alien. But it’s not much of a connection (the beast can’t speak, and its face is uncompromisingly non-anthropomorphized), and the ham-fisted “connection” is fleeting, lasting mere seconds. It’s almost as if Abrams realized he was too far into Cloverfield territory and had to pull back with a half-assed pathos-grab.
This scene looks even more awkward when you compare it to the scenes it’s presumably paying homage to. The two characters in Super 8 bond over how much pain and meaningless suffering there is in the world. Compare that to a heartfelt connection over Reese’s Pieces, or friends enacting a pirate fantasy, or an inter-species musical exchange. Super 8’s moment of understanding isn’t as elegant or as fun as any of these. The film just doesn’t put in the work and EARN the barrier-crossing connection the way Spielberg nearly always did.
There are a couple of blatant pathos-grabs near the end of this film, but this one is the most awkward. One of the arcs (the one about letting go and becoming your own person and growing up) culminates in a symbolic gesture that actually did make me thrill a little (but only for a moment; within seconds, a bit of dialogue between father and son seems to make the opposite point, that there’s no need to grow up so long as daddy’s still there). The balance between busy monster-fueled action and actual human-being-fueled adventure is just all out of whack, making Super 8 feel distinctly un-Spielberg-y.
And that’s fine; Abrams gets to make whatever movie he wants to make. He doesn’t HAVE to make a Spielberg-y movie. The failure here is in choosing to try to make two movies at once; it’s the smashing together of these two incompatible approaches to the unknown that makes Super 8 so difficult to just sit back and enjoy.
(Image: the film’s decidedly VERY Spielberg-y poster, which I love)
I recently stumbled across an article on Wikipedia about a little bit of Usenet slang from the 90s that has more relevance now than it did maybe even then: the “eternal September.”
The story goes that, every September at universities around America, a bunch of new students would arrive on campus with really no idea what they were doing. Those students would bring their general cluelessness with them when they signed on to Usenet, one of the earliest internet social networks.
As a result of this influx of new people on Usenet’s multiple groups, whatever social norms had become entrenched there over the course of the prior year seemed to disappear around September of every year. Around that time, new users became the loudest voice, and the whole thing looked less like an organized social network and more like anarchy. Individual Usenet groups became a mess of people who had very little idea how those services worked and how to politely join the conversation. In short, around September of every year, Usenet looked more like a YouTube comment thread than an organized social network. (video possibly NSFW: language)
But around 1993, according to that Wikipedia article, the internet morphed into a thing that new people started using every day. That meant that the anarchic disregard of internet social norms was happening constantly. New people were continually joining these services, and there was always a rabble that just didn’t know how to behave in those groups. (Ignoring, for a moment, those that purposely disregard these rules for the lulz.)
That’s what led a guy named Dave Fischer to write that September 1993 had never ended, that Usenet would forever feel like it was another anarchic September. The rabble of new users had overtaken Usenet’s ability to instill social norms, and internet etiquette was a thing of the past.
This is so fascinating to me. The most obvious plain meaning of the phrase is that internet etiquette is basically no longer a thing. The ever-broadening scope of the internet means that only very small subgroups, e.g., individual subreddits, forums, etc., have any ability to actually instill etiquette in their participants, and even these small subgroups face a constant September of clueless and etiquette-less new users. And the eternal September effect means that something like YouTube, with such a giant user base, can’t ever really develop specific sets of norms or etiquette because the September turnover is constant. There’s no persistent user base that can enact the social norms with any efficacy. The September has gotten so long that new users are the norm, not mannered pre-September users. Hence, YouTube comment threads.
But beyond that, this “eternal September” effect could also be understood as the philosophy that creates a certain kind of elitism-based humor on the internet. For instance, there’s at least one blog dedicated to showcasing individuals that think articles from The Onion are real news stories. That blog essentially highlights the fact that there are constantly new people finding The Onion that have never seen it before. It’s all about The Onion’s “eternal September.”
That’s true of Twitter (see, e.g., the old man who thinks Twitter is a search engine), Facebook (see, e.g., this woman who mistook Facebook’s status update box for a search box (possibly NSFW: very embarrassing search)), and even iPhone’s autocorrect feature (see generally people that are too new to autocorrect to know that they have to check their work). Presumably, the people who are doing these things would stop doing them after acclimating to Twitter, Facebook, and the iPhone. But this “eternal September” means that content will never run out for these types of blogs, because new people are constantly just starting to use these social tools and, therefore, just starting to use them wrong.
And the “eternal September” shows no sign of going away. If anything, its eternal-ness is only now becoming apparent. Because the “eternal September” applies not only to new users of established technologies, but also to new technologies. Or maybe more accurately, the increasingly frequent arrivals of new technologies (and updates to old technologies) mean that there are limitless opportunities for new September users. You can see these Septembers happen every time Facebook changes its layout or a new social networking platform arises.
“Eternal September” is a description of the lag between the first adoption of a technology and its widespread use, a description that applies to pretty much every technological development (no matter how small) and pretty much every person at some point. The phrase is coded with scorn for those that are slightly behind, the same scorn accompanying the term “n00b”. And it’s a particularly cruel scorn, because it isn’t early adopters making fun of late adopters, it’s early adopters confronting slightly-less-early adopters. This kind of early adopter can be notoriously elitist, and this is a tool for the barely-earlier adopters to hold their perceived superiority over an ever-larger group of people.
And that’s why September of ’93 is longer than even those original elitist Usenet denizens could have guessed: it’s a tool for enacting that ever-present elitism. At its core, the “eternal September” is a mechanism for those from last September to make fun of those from THIS September, all the while blissfully ignoring the fact that, maybe days ago, maybe hours ago, they were also just arriving.
Today is apparently destined to be a really important day in the history of intellectual property law in the modern age. Two things happened that impact the public as consumers of art and media, and when seen in the context of each other, these two events highlight what might be the real problem in getting the law to match with what these consumers actually value.
I’d be willing to wager that if you’re here, you noticed today that Wikipedia (and countless other sites) went dark or changed their usual look to protest something called SOPA. (Interestingly, the references in this post are nearly all to Wikipedia, so those won’t be visible until presumably after midnight.) SOPA (and its cousin PIPA) is very complicated, and you should go learn all about it. But something you might not have noticed also happened that is a little arcane and tangential to the more pressing issues of internet censorship and modern copyright. But you should still know about it. So here’s the rundown.
Some time long ago, America signed onto a complex international agreement so that we could get the benefits of other nations doing what we wanted with respect to copyright law. But we didn’t really do the same for them; for years, international works sold in America had shorter copyright terms than American works sold here.
The result was that a bunch of these international works fell into the public domain before they would have had they been afforded equal treatment under our laws. People started using them free from copyright restrictions (not a TON of people, but it’s kind of unclear exactly how many works, let alone important ones, are affected).
Anyway, America one day decided to get its act together and start doing what it was supposed to have already been doing under its international obligations. So, it extended the length of protection for these international works up to the same length as their American counterparts.
But what, you might ask, of all of those works that fell into the public domain before they were supposed to because of our lapses as an international partner? Aren’t the people who created these works being deprived of what is rightfully theirs (i.e. a number of extra years’ worth of profits)? Simple, says Congress: these works would now be pulled OUT of the public domain and become protected again!
This, on its face, could strike you one of two ways. One, you might say that it’s obviously fair for international works released in America to get the same protection as American works released at the same time. Anything else would be unjust and might also be a symbol of America’s hubristic visions of superiority, the same visions that allegedly have started wars and ruined our economy. In short, this result could be seen as an important step in America’s realization that we need to be team players with all of the other people on this planet.
But then there’s the other view: this new law stands for the proposition that congress can take public property and reclaim it for private individuals if they think it’s necessary (and not even necessary, maybe just rationally related to the government’s interests). I recognize the scariness of this, and I do feel that this is a bad sign. But not for the reasons you might think.
Obviously, this issue is complex, and I think both of these approaches to what is happening here are legitimate. But at first blush, I think it’s worth recognizing that if Congress had taken seriously America’s promise to the rest of the world, we’d never have had this problem. These works would just still be under copyright law.
But that’s beside the point. There’s a more important fallacy buried here. The idea that this ruling finally removes the sanctity of the public domain is a confusing one. It implies that there was any sanctity in the first place. In this instance, the international works that fell into the public domain did so accidentally, not because of some sort of altruistic principles. In fact, they fell into the public domain because of America’s greed and attitude of exceptionalism. That’s not exactly the purity that some critics of this ruling would wish were at play.
Essentially, the problem of this constriction of the public domain is a problem of our own making, and not even a copyright problem, but a POLITICAL problem.
The truth is that Congress is the one who could fix this by actually fixing copyright law and making it clear that the Court’s interpretation was wrong. But, instead, because Hollywood pays the bills, they only make copyright law worse.
And that’s important. This case, and cases like it, stand for the principle that the Constitution doesn’t limit Congress’s power to delineate the boundaries of the public domain as long as it is doing so to reasonably achieve its goals. We can’t expect the court to fix copyright; it’s just plain NOT ITS JOB. And we apparently can’t trust Congress to do it, because we can’t trust Congress to do ANY of the things we think are important.
That’s the heart of the matter. Today’s decision is not the court abdicating its duties as the paragon of good copyright policy; it’s the court reiterating that making policy isn’t its job at all, so we need to get our congresspeople to make good policy.
And that’s also what is at the heart of the SOPA and PIPA protests. We need SOPA-protest-like mechanisms and mentalities to fix things like this, not our faith in nine people in robes.
This case isn’t a disaster of copyright policy, it’s a civics lesson. I hope we’re all paying attention.
So. A lot of really interesting things have been going on. From day to day, it’s hard to tell which stories are actually going to be important in the long run, so I try to resist the urge to blather about everything I find interesting in a given day (that’s what the SBO News Tumblr is for). But it’s been a while, and one story has surfaced as having seemingly lasting importance. It’s the story of LulzSec. Some call them nefarious hackers, and others call them vanguards of a new way of thinking, white-hat jokesters exposing weaknesses without doing too much lasting damage. The truth is (surprise!) more complicated.
There are a lot of angles from which to approach this story, but I’d just like to highlight some of the misconceptions that the public and the media seem to have about what LulzSec actually did.
For starters, people act like LulzSec did something unprecedented by exposing all of this private information. And while that’s partially true (in that they are probably the most organized effort to do what they did), it’s also a kind of misdirection regarding what it is they actually exposed.
Here’s what I mean: to me, LulzSec exposing the weaknesses of networks of information is not that different from the series of Facebook privacy mistakes that exposed increasing amounts of personal data. There just isn’t that huge of a difference between having your private information exposed on the open web because you wrongly trusted Facebook’s default privacy settings and having your login information displayed publicly because you wrongly trusted Sony’s encryption policies.
The (already nearly-forgotten) Anthony Weiner story is actually a pretty good example of this. Weiner was just a normal guy who didn’t understand how Twitter’s architecture protected (or didn’t protect) his privacy. This led to his junk ending up all over the internet. Should we cut him more slack than, say, the FBI, whose website protections were easily circumvented with some simple hacking scripts? Weiner exposed himself (haha) the same way the FBI did: because they didn’t understand how the technology they relied on worked.
And that’s what’s really at stake here. Network technology has become centrally important to our everyday lives, but it’s also become increasingly sophisticated. And we have a duty to understand that sophistication.
Then again, no matter how high the stakes are, we can’t pretend that changing technology hasn’t created high risk before. There was a time when people uniformly left their front doors open, a time when having credit in a store just meant telling them your name. Those are technologies (doors, loose credit systems) that have since become outmoded for their purposes (keeping intruders out of your home, keeping tabs on your purchases).
And that happened because people exploited those technologies; they stole from homes and used false names for credit. The unsophisticated and ineffective nature of these types of systems was exposed, requiring better systems. That’s how these things have always worked. And that’s how they’ve worked with LulzSec, too.
But here’s the thing: the people who took advantage of the system and displayed its weaknesses in those cases were called “criminals,” not jokesters or revolutionaries or white-hats. Having a high-minded reason for stealing and trashing things doesn’t save you from consequences. Maybe LulzSec deserves the criminal treatment quite a bit more than the white-hat treatment.
Now obviously it’s more complicated than “you’re either a criminal or you’re a sheep.” I wrote a while back about WikiLeaks, which I sort of praised for wanting to change the way information is kept by governments but also sort of criticized for the dangerous way they are going about creating that change. I’d say the same here: I’m all for people using better passwords and companies using better crypto and more secure networks. But that doesn’t mean I’m a fan of giving out huge amounts of personal information about otherwise innocent bystanders.
This whole thing is even more confusing when you try to come up with offline analogs. Imagine a band of jokers wandering around suburban neighborhoods and stealing valuables from homes without alarm systems just to prove how vulnerable those houses are. That is not how social change is made. That is how moderately smart people get their jollies at everyone else’s expense. And that may be all that’s happening here.
Though even that analogy breaks down when we realize that LulzSec isn’t really hacking deeply sophisticated servers. They’re hacking websites, the public-facing, loosely-protected internet billboards for these companies.
For example: not too long ago, LulzSec took down the CIA’s website. But the CIA doesn’t keep its secrets on its website; the CIA’s website is likely slightly less secure than, say, the Huffington Post’s. It takes very little work to steal the furniture off of someone’s front porch, but it takes more work to steal from their safe. LulzSec basically only stole porch furniture, even if it was the kind of porch furniture we’d rather not have left out.
The bottom line is that the whole LulzSec situation demonstrates the imbalanced interaction between our understanding of our own technology, our expectations of privacy, and our desire to trust the companies that hold our information. That’s the same imbalanced interaction that was exposed by the Facebook privacy flap, the Anthony Weiner fiasco, password phishing scams, and every privacy crisis in internet history.
And the solution isn’t angry prosecution or sting operations. The solution is trying to understand these interactions better. Technology isn’t likely to entirely outmode the social contract any time soon. We still have to make our society work. And only more education and more understanding will make that happen.
I know this isn’t usually the place for news. So, I’ve decided to start a blog that IS a good place for news. Surprise! It’s the Stars Blink Out Tumblr. Check it out! And if you tumbl, maybe reblog some stuff!
Yeah, I know, not exactly breaking news. But I know you don’t really come to this blog for “news.” Instead, as you might expect, I have something to say related to how people responded to this news, specifically on the Internet and on online social networks (surprise!).
The always-enlightening Gabfest crowd discussed our newfound societal inclination to publicly declare our personally felt sentiments. The argument is that we now live in a society so fixated on authenticity that everyone feels compelled to share their feelings on this momentous event publicly and immediately, with no filtering.
The Gabfest’s major misstep is evident in the final moments of the segment: they essentially finish the story with each of them saying that they didn’t do this themselves, but everyone else did, so it’s a reflection of a cultural force. If it IS a cultural force, why are they immune to it?
I think understanding our obsession with authenticity as some sort of uncontrollable urge to share our feelings misstates what social networking actually accomplishes here.
It’s certainly true that online social networks make it easier to reflect authentically our own feelings to our friends. But the online social network can do only that: facilitate the offline social network. In other words, the only people who take to Facebook or Twitter to publicly share their emotional reactions to bin Laden’s death are the same people that were disseminating these sentiments through their own offline social networks before these websites even existed.
The result is that, while it might look like people are having an unprecedented emotional response to some global piece of news (be it joy at the death of an enemy or shame at the public celebration of a person’s death), that emotional response is essentially the same as it has always been, just more visible.
The real novelty in this situation is not that more people are sharing their opinions; it’s that more people are seeing each other’s opinions. Back on September 11th, 2001, for instance, I could only get the reactions of those people that I saw around me on a daily basis. And believe me, they were vitriolic and extreme and numerous. But they were limited by the number of people in my social network that I saw on any given day.
All online social networks have done is expand the functional, accessible size of this social network, making these opinions LOOK more common, even though they are as common as they have always been.
But online social networks have also, to a certain extent, democratized the response to situations like this. Offline, the people with whom I correspond most regularly and sustainedly are those that tend to agree with me. That is the nature of friendship. But online social networks make friendship something a little broader. A more diverse group of people now have access to my attention, people that I do care about but wouldn’t have heard from in a previous era of information sharing. Essentially, instead of getting the somewhat limited viewpoints of those friends I already most closely agree with, I get the diverse perspectives of the broadest circle of my friends.
Authenticity is at war with artifice every day. We want to authentically represent ourselves, but we also wear slimming clothes and make-up and only say the things we think won’t disrupt or offend those around us. Maybe some of us maintain less distance between impulse and action, but in the end, we shape our actions to what we want those actions to be, not some deep sense of who we are. (Sure, what we want our actions to be is influenced by that deep “who we are,” but even if the animus is deep, the agency is at a higher level.)
That war between impulse and control, between authenticity and self-definition still exists online. Online social networks have not achieved some unprecedented level of authenticity in social interaction; they’ve achieved an unprecedented AMOUNT of social interaction. The nature of that interaction is essentially unchanged, still as authentic or inauthentic as it always has been.
In my opinion, that’s probably more useful. How much do we desire a society where people say whatever is on their mind all of the time? How much do we desire pure, unadulterated authenticity? I’d argue that the authenticity we now have access to is the more useful variety: people can easily, quickly, and accurately represent how they see themselves, not necessarily what they objectively are. That, to me, is the bedrock upon which social interaction is built. I’m glad it’s the kind of authenticity brought out by such an ambivalence-breeding event as this one.
I love a story that combines really fundamental issues about how people apprehend meaning with the complexities of anticipating how our own technology will impact our cultural future. And no story combines these elements as elegantly, or as surprisingly, as the story of the Department of Energy’s 1991 waste isolation report, as reported by Slate.
First, a brief summary of the problem the plan anticipates, as laid out in the article: our nuclear waste and nuclear materials are going to outlast us. That’s just a fact of the physics of these materials. These hazardous materials will remain hazardous long after the possible collapse of all of society, or even the death of all mankind.
So, in an effort to protect future human societies (and possible non-human ones) from the waste, we’d have to find a way of labeling this material as hazardous for a people whose language might look nothing at all like ours and whose society is organized in ways we can’t predict.
The solution hatched by Sandia Labs, in a report commissioned by the Department of Energy, is a surprising but sensible one: hire a bunch of semioticians, people who are experts at conveying information symbolically, to come up with some immediately-recognizable sign or other information-transfer mechanism to alert future societies to the hidden dangers we have created.
For those unfamiliar with it, semiotics is a branch of philosophy that deals with symbols. It’s a study that seeks to explain how symbols indicate other things, how that indication is created, how the brain dives through layers of symbols almost automatically, and all of the different ways these symbols are manifested.
So expert semioticians are essentially people who are experts at how things MEAN other things. It makes sense, then, that these are the people hired to devise something lasting and language-independent that indicates danger to any observer.
The solutions they propose are just mind-bendingly clever. One proposal: build a lattice of sharp, dangerous-looking rocks on top of the waste, discouraging exploration of the area. Another plan calls for building giant stone structures with pathways through them that are too narrow for people to set up camps and live there, thus discouraging settling in the polluted area.
Some rely on more complex systems not directly linked to the symbols themselves, but to how symbols gain meaning. One such proposal is the setting-up of a priestly class of sorts that would know of the dangers of the nuclear sites and would transmit this information in a form more akin to religious dogma than to scientific learning.
The whole discussion smacks of junk futurism and conspiracy theories, like Project Blue Book or a set of secret orders for the president on how to deal with an alien invasion. The difference is that the problem anticipated here is essentially a certainty, something guaranteed by the physical laws of the universe.
This is a forward-thinking approach to something that is essentially a predictable result of our current actions. We’ve created dangerous waste that, as long as it is on this earth, is dangerous to humanity for generations upon generations to come. The waste already exists. It is something that we KNOW will exist for a predictable time into the future. We’re just attempting to mitigate its ill effects.
I don’t know how well the idea of an atomic priesthood is going to work. But I really do love the idea of landscapes constructed to be difficult to live in just to warn people off from nuclear waste sites. What if the darkest, most uninhabitable depths of the ocean are actually created by a long-dead advanced civilization to hide the technologies that became their very undoing?
I know it sounds like an INSANE stretch, but this plan seems to suggest that this scenario might be the reality of distant-future generations.
(Image from the original report, depicting a “menacing earthworks” approach to deterring people from disturbing a nuclear waste site.)
Those of you that know me (and those that don’t but have been reading my posts for a while here) certainly have seen that I’m not a fan of modern copyright law. I think it’s too complex to work, too restrictive on first amendment rights, and generally gets used in a way that is anti-art, not pro-art. But that’s only the first version of myself. You probably also know that I’m not a copyright abolitionist or copyright-basher. Version two of myself thinks that copyright is necessary, and it can be used reasonably and in a huge variety of ways to actually make the world of culture a lot better.
Now if I were solely that first version of myself, I’d look at a story of an artist doing something weird with copyright law and say “AHA! Copyright is broken! This is emblematic of the deeply flawed system!” But for this critique of a recent story involving Lady Gaga, I’m going to be entirely that second version of myself. The tech and law blog Techdirt recently posted a story that’s all about how Lady Gaga’s recent actions betray just how horribly flawed copyright law is. It’s a story that the first version of myself would praise the hell out of, but the second version of myself is just too riled up by the whole thing to let that happen.
The article suggests that, if we look at how Lady Gaga uses copyright law, we can see just how broken copyright law is. The article asserts that Lady Gaga uses copyright in a way that does not at all match with the actual reason for copyright law’s existence. Copyright law is meant to incentivize creation of new art, and the article says that Lady Gaga’s attempts to use these laws for herself show just how far from this original goal the actual uses of copyright law have strayed.
Specifically, the article cites two major examples: Gaga’s recent suit against “Baby Gaga” for the use of her image and her brand, and her treatment of photographers at her concerts, specifically that she requires them to sign agreements that give her copyright in their images. Let’s take these one at a time, then talk about why the whole endeavor of criticizing Gaga’s use of copyright law is actually really deeply flawed, even more flawed than the actual modern copyright system.
So the article only mentions in passing that the Baby Gaga thing is probably not a copyright matter. But that’s really important, so let’s not conflate. Gaga sued over the use of her name and of her personality rights, things like her sensibilities and her style. I don’t think anyone’s arguing that Lady Gaga doesn’t have the right to control her image and her brand, which are the EXACT TYPES of things that trademark and personality rights are meant to protect. In other words, the Baby Gaga suit is not an example of Lady Gaga’s twisted understanding of copyright law; it’s a sign of her ACCURATE understanding of trademark and personality rights law, two fields of law that are actually surprisingly sensible compared to copyright law.
The stickier example is the photographer contracts. I don’t like what Gaga is doing with these, but she’s certainly within her rights to do it. Those contracts include terms about how the photographers can use their photos, something that’s pretty NORMAL for photographer agreements. These photographers sign the agreements when they go to her concerts, so it’s not like she’s affecting their first amendment rights or something: they are essentially her employees when they contract with her.
The bottom line is that if she wants to put limits on the scope of these photographers’ agreements with her, they still have to AGREE to those limits if they want the access she’s agreeing to give them. They give something of value up and receive something of value in exchange. If they want to retain copyright of their images, they should photograph a different event, let someone who doesn’t care about who owns their art become Lady Gaga’s shill for that gig. This is a contracts and competition issue, not a copyright one.
The point of the Techdirt article is essentially that copyright has morphed into something terrible because people like Lady Gaga use it in unanticipated ways. But most of the unanticipated ways they list here aren’t even copyright related: they’re contracts and trademark related.
But let’s not forget the real reason that copyright law is structured as it is, with lots of very small things declared the rights of the artist. It’s designed to control the use of an artist’s work, no matter what that art is and no matter what the use is. It’s supposed to be flexible in the direction of rights-holders, ideally artists. And this flexibility is in place to allow for emerging markets.
Here’s what I’m saying: if the purpose of copyright law is to incentivize art by creating ways in which artists can control the use of that art and therefore profit from it, then isn’t allowing an artist whose show is a spectacle worth seeing the ability to contract carefully with photographers just another way of incentivizing the creation of these kinds of shows? Isn’t Lady Gaga just taking advantage of one of those incentives with this kind of deal, not going against the incentive-based intentions of copyright law?
That’s not to say that she’s making a GOOD move or that she’s doing something that is good for the legal landscape of art (she probably isn’t). But she IS doing exactly what copyright law would have her do: she’s monetizing her art using controls on distribution. It’s what the founders would have wanted.
(Image: Lady Gaga Screen Print Painting, a CC-licensed photograph of a copyrightable screen print painting, probably a non-licensed derivative work of a surely-copyrighted, duly licensed image of Lady Gaga. IT’S COMPLICATED.)
It’s that time of year again. That time when we remember romantic love, and how glorious it can be. When we send cards to our loved ones explaining how unqualifiedly wonderful they are. There are no “If you would stop snoring you’d be perfect” cards or “I wish you were more self-confident” cards, only “I Love You” and “Be Mine.”
Yes, it’s Valentine’s Day, the heart-shaped box of treacle that so oversimplifies the complexity of relationships. And that can be kind of nice, enjoying the simple things, remembering the good, and celebrating people we care about. But when we start to unpack that heart-shaped box, we start to see the cracks in the veneer on this love-fest and the complicated troubles of this yearly remembrance.
The trouble starts when you consider the origins of this holiday. Because “holiday” is a laden word, and it’s not clear if it applies to Valentine’s Day.
The event first started as a classic Catholic saint’s day, a day reserved for remembrances of the holiest Christians and how they (usually) gruesomely gave their lives in martyrdom to the cause. In St. Valentine’s case, no one really knows what happened to him, but it’s pretty clear it probably had nothing to do with love (interestingly, because of the uncertainty around the story of St. Valentine, his official Catholic saint’s day was removed from the calendar in the 60s).
Somehow, this religious observance morphed into a celebration of romantic love. It started as far back as the 1700s, and British hand-made valentines were popular throughout the 1800s, but the whole practice turned a corner into mass-production and commercialization at some point.
The blame is usually cast on the greeting card companies. The term “Hallmark holiday” might as well have been invented for Valentine’s Day. These companies had finally created a wholly novel celebration of romantic love, which led to years and years of cards, commercials, movies, and television, all filled with plastic portrayals of what is ostensibly a very dynamic and heated emotion.
The whole Valentine thing smacks of historical disconnect, exaggerated sentiment, and irrelevance. But when we look at the cultural reaction to that disconnect, instead of seeing a wall of uniform disdain, we see something pretty varied and complex.
On the one hand, a lot of people still really like this holiday. Aside from couples that always make a big deal out of the holiday, there’s still that universal grade school experience of making valentines for your classmates (in my school, we had to make one for each student in the class, but anecdotes from others would have me believe that some schools allowed a little bit of selection, and therefore pre-teen heartbreak). Maybe that experience catches some of us and carries over to adulthood, because there’s still a pretty solid market for Valentine’s Day candy and cards.
There’s also the yearly Valentine’s episode, a staple of most television shows. By no means are these specials all good, but they are ubiquitous, expected by audiences, and even looked forward to by some critics. For better or for worse, our culture is one in which the mainstream has embraced February 14th as a day to celebrate candy, hearts, pink and red, paper cards with superheroes or puns on them, and, not least, love.
But let’s not forget that there’s a tremendous amount of backlash against this holiday. Of all of the holidays on the calendar, it’s the one people most love to hate. Mother’s Day, an equally invented holiday, is pretty universally seen as a good opportunity to thank our mothers, not as the crass commercialization of a complex relationship (even though it basically is just that, to the same extent as Valentine’s Day).
Maybe that’s the cultural power of this holiday. Valentine’s Day is, if nothing else, a versatile holiday. Getting together with your single friends to get drunk doesn’t sound like a romantic evening, but it IS a celebration of the holiday. People celebrate by burning their ex’s stuff, or by drinking wine with friends, or by watching action movies to rebel against the whole thing. Even those that love to hate Valentine’s Day still are getting some serious utility out of its existence.
But the list of hypothetical V-Day activities does seem to focus a lot on the ample dark side of the holiday. I think NPR’s Pop Culture Happy Hour said it best when they said that Valentine’s Day tends to have at least some negative emotional and social effects, no matter what your situation is. The unhappily single person is reminded of their single-ness, the new couple is reminded of the complexity and pressure associated with serious relationships, and even stable, long-term couples still sometimes run into mismatched expectations over the holiday.
On St. Patrick’s Day, everyone is Irish. Valentine’s Day offers no such out: single people remain single, unhappily married couples continue to be unhappily married, and gay couples remain marginalized and unable to marry.
Romantic relationships are complex, but Valentine’s Day is, at its heart, a holiday celebrating simplicity. To that end, those that revel in the simplicity of the whole thing (television shows, the rare adoring couple that gets SUPER into it, greeting card writers, jaded V-Day rebels, etc.) can revel in this holiday. But any reminders of the underlying intricacy and incomprehensibility of romance make this holiday empty and galling.
So from all of us at Stars Blink Out, where we are dedicated to highlighting the complexity in even the most simple situations, have a strange, confusing, complicated, crass, and maybe a little sweet Valentine’s Day.
If you are at all interested in copyright law and new technology’s effect on innovation, then this article will give you chills. As Anton Ego put it in the classic pro-innovation manifesto, Ratatouille, “The world is often unkind to new talents, new creations. The new needs friends.” And it is with great pleasure that I report on how a District Judge in Los Angeles has upliftingly embodied this notion. I’d like to briefly summarize what the case is about, then talk about why the things this judge said are so exciting.
The defendant in this case is charged with breaking digital mechanisms that protect copyrights, in this case, the Xbox‘s controls on what kinds of games can be played on it. The defendant developed a way to hack into the Xbox and play pirated, non-officially-licensed games on his system. Arguably, the main purpose of the hack is to let illegally copied games run on the system. But our defendant argues that there are a lot of non-infringing uses for this kind of hack, including developing new technologies for the machine and for playing your own legal back-up copies of your games, to name just two.
So this case went to trial, and while there have been a lot of cases about reverse engineering technologies and hacking them (the semi-recent iPhone jailbreaking rules, for instance), this is the first about the Xbox. Very exciting, but also potentially dangerous.
Because the judge presiding over the case could happen to really like the rule against circumventing these technologies, maybe because he thinks that protecting the large companies that develop these technologies is more important than letting tinkerers break their machines open and try to innovate. If a judge like that presides over the case, then we remain where we have been since the Digital Millennium Copyright Act went into effect in 1998: copyright law prevents a very important (in my opinion) type of innovation.
But lo and behold, the judge in this case is not the kind of pro-DMCA hard-liner that some of us were afraid of. During opening statements just a few weeks back, the presiding judge, Philip Gutierrez, realized that the prosecution’s case had some problems. He pointed out, as the article linked above says, problems with witness credibility and with the prosecution’s characterization of the defendant’s intent. Even more importantly, the judge reversed his earlier decision to remove a fair use defense from the defendant’s arsenal, essentially saying that the law must allow some experimentation on this kind of technology.
That is key: according to this judge, allowing tinkering, home-brewing, and hacking is IMPORTANT, and anyone who does it is allowed to try to prove that they did it with good reason, reason more important than the arbitrary strictures of the DMCA.
It also signals a big step forward in how judges think about these issues. To a certain extent, the prosecutors made all of these mistakes in this case because they thought they could get away with it. And if they got a judge like a lot of the other circuit judges out there, who maybe don’t understand the role of hacking in innovation, they WOULD have gotten away with all of this. It’s supremely uplifting to see a judge making it clear that you can’t just rely on judges liking your policy aims to win cases against hackers; when you want to curb innovation, your case better be pretty strong.
(Brief notes: First of all, since this incident, the prosecution decided to dismiss the case, essentially giving up, for now, on trying to prosecute this kind of thing. Victory! For now. Also, for a great overview of how this kind of hacking works, check out famed Xbox hacker Bunnie’s overview.)
Gawker Media is a pretty large, pretty influential blogging network that includes a lot of different types of content. They are responsible for Lifehacker (a sort of productivity blog / DIY hub), io9 (a place for sci-fi nerds), Gawker itself (a sort of gossip / politics tabloid-blog?), and many more. Essentially, they’ve become a platform for a certain type of content. So their design choices not only indicate the way the Internet has been heading, but they also influence the future of other sites. So here’s some stuff about the most recent redesign.
(Incidentally, you can scroll to the bottom of this post to read a brief disclosure about my relationship with Gawker if you are worried about my journalistic integrity. Short story shorter: I’ve freelanced for them, but that shouldn’t matter here.)
First, a brief overview of what has happened. As you can see at this post and in the video there, the list of posts is on one side, organized with most recent first, and the content is on the other side. The new set-up also gives Gawker a way to highlight interesting media and pictures, not necessarily the text of a given post. In short, they’ve redesigned to emphasize interesting visuals and information, not necessarily in-depth writing.
Which is fine! That’s sort of been Gawker’s model for a while. The in-depth writing is an added bonus on top of what is essentially a collection of tabloid-y news scoops, oddity roundups, and short tips, highlighted by eye-catching media. That’s what it does, and it does it extremely well.
Anil Dash, Internet genius and trend-analyzer (and more!), has a lot to say about this redesign, including a roundup of other commenters speaking out. He’s right on the money when he says that if this is the kind of information you want to put out there, this new set-up is exactly what Gawker needs. You should go there to read more, but here’s a little snip:
In this way, blogs are emphasizing the trait that’s always defined them, the fact that they’re an ongoing flow of information instead of just a collection of published pages. By allowing that flow to continue regardless of which particular piece of embedded content has caught your eye, Gawker and Twitter are just showing the vibrancy and resilience of the format.
But I just wanted to add one more thought to this whole jumble. Another reason Gawker can afford a design like this is that they’re already famous. From a search-engine-optimization standpoint, this would be a weird choice. Only a site with a devoted audience, a clearly defined niche, and a built-in expectation of quality can afford such a busy front page with only one actual textual piece on it. A start-up blog would have to think very differently. It’d have to have a LOT of text on its front page and make a lot more effort to welcome new readers.
Dash is basically right on point when he says that this marks Gawker borrowing from the design of web-based applications like Twitter, mostly because web-apps don’t have to advertise themselves on every page like blogs do. But maybe the better way to think about it is that all web-based information or media platforms are all starting to prioritize the same kinds of things, much like cable channels slowly did over the course of their development.
In the end, we’re headed to a different version of the same place we always end up with this whole Gawker thing. Gawker is an established brand, a trusted news aggregator, and the internet is dividing itself into fewer and fewer recognized platforms for this kind of thing, with the independent blogger / startup personal brand having a more and more difficult time making an impact. Essentially just as television operates now.
If we think about what makes the Internet special, this would still preserve a lot of its strengths: the easiest platforms for making an impact (YouTube, for example) are those that will more fully develop and become popular, and those platforms will still allow interesting things to happen. But I think we’re kind of past the days when new platforms can become giants. I have a post brewing in my head about the difference between networks, platforms, and applications in the world of media, but that’ll have to wait. For now, I think Gawker’s new design is a hint of the implications of this platform-centric approach to Internet media.
(Brief disclosure: I sort of work for Gawker. I write for their sci-fi blog io9, and they pay me, but as a display of my limited involvement, I heard about this redesign from Anil Dash, not from my ties to the company. I’m basically a long-term freelancer for them, so I have absolutely nothing at all to do with big decisions like this redesign or mission statements or anything. As much as I believe that my ties to the company have not influenced this post at all (since I am writing generally about structure and the purpose of Gawker), I’ll leave it to you to discount what I have to say if you disagree.)
Space Canadian Chris Hadfield continues his quest for interplanetary internet dominance with this incredible experiment, submitted by two Nova Scotia high school students: Kendra Lemke and Meredith Faulkner.
They wanted to know what would happen if you wrung out a washcloth on the ISS. I won’t spoil the ending for you, but suffice it to say it’s about the coolest thing I’ve ever seen.
I love how he doesn’t even have to hold the mic. Great job, Kendra and Meredith! For science!
A cover of “This Charming Man” by The Smiths done in the style of Super Mario Bros., created by lazyitis (via it8bit and theadamglass). Yesyesyes. A new challenger for the title of “BEST THING EVER.”