December 22, 2003
Past. Present. Future. (Take 2)
Here we go again, half-surprised at still being here and, like 12 months ago, rich with words of gratitude.
Wishing you all peaceful days, spent doing what you love with the people you love.
See you all in 2004.
Posted by fabio sergio | 4:56 PM | permalink
December 18, 2003
Cross-town linking. Setting the stage.
New derailed thinking patterns over at IDII's Hub.
"Setting the stage" deals with the space taking shape where electromagnetic waves embrace.
After IDII's Hub's disappearance I've re-published my original contribution below. Enjoy.
Setting the stage
Consider the moments that lead to a conversation mediated by mobile phones.
Punching a few buttons, waiting for a dull bi-tonal reply, finally communicating, hopefully with the person we were originally trying to reach and not with a synthetic voice of some sort.
This analog-phone-like experience is quickly morphing, shaped by new cultural habits and by spontaneous/induced desires to customize every single mobile touch-point.
A few examples first, to coat ideas with a veneer of context.
In Japan, where social politeness plays a much more important role than it does in any western country, mobile phone use has given birth to a new habit dubbed "the knock-knock", which involves sending a short text message to the person you desire to contact before actually disrupting his/her time-space continuum unannounced.
This far-east custom's evil twin is the crude but effective habit of filtering incoming calls by looking at the caller's number first, quickly translated on new phones into a "divert all non-phonebook calls to voice mail" feature, and now spreading to text-based communication as well.
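A feature like that is simple enough to sketch. The snippet below is purely illustrative (no handset runs Python, and the phonebook entries are invented), assuming a phonebook keyed by caller ID:

```python
# Illustrative sketch only: divert any call whose number is not
# in the phonebook to voice mail. Names and numbers are made up.
PHONEBOOK = {"+39055123456": "Valeria", "+442071234567": "Ashley"}

def route_call(caller_id, phonebook=PHONEBOOK):
    """Return 'ring' for known contacts, 'voicemail' for everyone else."""
    return "ring" if caller_id in phonebook else "voicemail"
```

The same whitelist logic extends naturally to text messages, which is exactly the spread the entry describes.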
Another interesting sign o' the times is the success of personalized ring-back tones: instead of hearing a repetitive, nondescript tune, your callers will wait while enjoying your music of choice.
Finally there are videocall-enabled phones, which bring the need to "check yourself in the mirror" both before making and receiving videocalls: a brief stretch of time during which you can quickly use the screen to self-assess whether you deem yourself presentable for the upcoming personal narrowcast...or whether you'd rather press the video-mute button.
End of context.
What do all of these habits, features and services whisper into sensitive ears?
I'd be tempted to say that there's a space taking shape where electromagnetic waves embrace.
In the past, when phones were still nailed to a wall by their wires, calling someone usually meant knowing they were in a specific context, whether at home or at work.
Because of the mobile nature of today's connected devices, both parties recognize that any attempt at communicating will potentially drop on the unsuspecting "victim" with highly disruptive effects; hence all of the aforementioned active or passive efforts to ease the transition of both parties into a shared space.
All in all I'd be tempted to call this virtual handshake a "setting of the stage" of sorts, a few seconds during which bells are rung, doors are opened, credentials checked and mutual, silent agreements established.
Think about it as "the other side of presence": rather than letting you know I'm busy/offline/happy, we'll both work to create a buffer zone where we'll feel comfortable communicating.
Why does this matter to interaction designers?
Easy: how can we harmonize all of these separate moments into a cohesive, pleasurable experience?
How can we help define this new space, also considering that in our Instant-Messaging era the need for buffering and stage-setting is likely to ramp up exponentially, for example when it comes to new IP twists on old voice tricks?
To make things more complicated: what if the conversation ceases to be two-sided and becomes one-to-many-to-many?
If Marco Susani, Roberto Tagliabue and Federico Casalegno's "Mapping Communication" project came to mind then we are on the same brainwave-length.
Bear with me as I close this entry with one more (derailed) thought.
I would argue that all of these "dead" moments, silently rhythm-ing our communication patterns, are liminal in nature, "...thresholds or transitions from one state or space to another."
We are "filling them up".
Did anyone think "horror vacui"?
So did I.
Posted by fabio sergio | 12:05 PM | permalink
December 16, 2003
Technology for forgetting.
Steven Johnson's latest article, "Offloading Your Memories" (might require registration), deals with an old pet-peeve of mine, the increasing ease with which we can all document our lives as they unfold, to publish the recorded information in real time or to retrieve it at a later date, for public and personal use.
Other than Johnson's article, new food for thought on the subject was also recently provided by "Memory and Storytelling", Session 6 of IDII's symposium, and by the ensuing coffee-break conversations with Mike Kuniavsky and Tom Erikson.
The key that opened a new(ish) door is the concept that "memory is a mediated experience", introduced by Giovanna Leone (and others) at the end of the aforementioned session.
This idea translates into the living, social nature of memory, which is not a static, univocal recollection of faces, places, events and dates, but instead a constantly evolving narration.
The key difference between the past and the future is that in the "old days" (now, that is) the passing of time usually involved a signal vs. noise imbalance, with noise levels usually rising along the time axis.
In "unassisted" memory creating/sharing processes there's a medium-related loss of quality in the information being passed, which allows that same information to change over time, thus creating the conditions for filtering and parsing events into collective agreements about what actually happened.
We are all myth makers, as recalled by Federico Casalegno during the symposium, we re-shape the past both at an individual and collective level.
In this sense memory is a "mediated experience": we think we remember something, but that something is actually the result of a process that involves re-telling, and thus re-living, events over and over again.
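As a toy illustration of that dynamic (my own construction, nothing presented at the symposium), one could model a memory as a list of details that each re-telling keeps, blurs, or drops:

```python
import random

def retell(memory, noise=0.2, seed=0):
    """One re-telling of a memory: each detail is kept, blurred, or lost.
    A toy model of 'noise rising along the time axis'."""
    rng = random.Random(seed)
    survived = []
    for detail in memory:
        r = rng.random()
        if r < noise:                       # detail lost outright
            continue
        if r < 2 * noise:                   # detail blurred in the re-telling
            survived.append(detail + " (more or less)")
        else:                               # detail passed on intact
            survived.append(detail)
    return survived

story = ["he raised", "she left", "it was raining"]
for generation in range(3):                 # three successive re-tellings
    story = retell(story, seed=generation)
```

After a few generations the surviving "story" is shorter and fuzzier than the original: exactly the room to maneuver that a perfect digital record would take away.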
Time is ripe for a question: how will digital, life-capturing tools play a role in this process?
A practical example might help to clarify things a bit.
Mr. A and Mr. B happen to meet, and the conversation leads to a heated discussion, which degenerates into a full-on verbal fight.
They separate, angry with each other and each believing himself to be on the "right" side of the debate.
After a few days they hook up again, matters having cooled off and all, and they talk about the incident, re-living the discussion while trying to clear things up.
The inherent fuzziness of their recollection helps dull the sharp edges, as we are known to remember positive things better and negative things less clearly, and in the end they agree on a common explanation of the argument, thus creating the possibility for their relationship to evolve around the event.
What is important here, though, is that what actually happened matters as much as what they mutually agreed happened.
The final experience, mediated through their second conversation, has the opportunity to change from negative to positive, leaving clarification in place of contrast.
All's well that ends well, right?
Let's now introduce a variable in the scenario above: Mr. A has digitally videotaped the discussion with his glasses of true vision.
How will this "absolute" reference impact the second round?
My guess is that the memory-related fuzziness that would have allowed Mr. B to say things like "I didn't precisely say that" and "you said this, so I said that" will be gone.
In other words there will be simply less room to maneuver for both of them, less room to mediate experience into memory.
Due to the timeless quality of digitally-produced artifacts, which potentially shine as new forever after they've first been created, Mr. A's descendants will still be able to hear (and judge) Mr. B's words and attitude.
Take this one social magnitude level higher and what you get is a society unable to let go of its past's tiniest details.
To somewhat wrap things up and glimpse into the future we could simply look at all the various forms of blogging and at what they have already introduced today.
Blogs are for all intents and purposes shared memories, near-daily accounts of actions and thoughts made public to the rest of the listening world.
From humorous takes to much more serious consequences blogs have already proven they can have significant short and long-term effects on people's lives.
If today I comment negatively on a specific product/company/brand on these very pages, if I speak badly about somebody, how will that matter in the long run, if I am to apply for a position in that company, or talk to that person?
If indeed markets are conversations, how will tools that store them and never forget ultimately have an impact on both?
Are we heading towards an über-politically correct world, where we'll be forced to always ponder all of our words for fear of getting quoted 20 years from now?
Will the true/false DNA of digital artifacts finally inform a future devoid of the room for doubt?
These questions are (part of) the reason why with everyone apparently fascinated with ways to remember I find myself toying with the idea of "technologies for forgetting".
As quoted during the symposium, "forgetting is a virtue, not a sin", and there's nothing wrong with letting go of things even when they are highly valuable, as Buddhists have long shown.
All in all we are facing a future strung tight between the ideal, peaceful world of the Memex, where man will be given "access to and command over the inherited knowledge of the ages", and one where Lenny Nero would feel at home, characterized by our collective inability to let go of our past.
I keep hoping (and working) for the first scenario to become our future, but recognize it will require active involvement from everyone, driven by ample awareness of what's at stake.
Deep down inside I can't also help but think that this apparent obsession with recording and storing can be somewhat seen as western society's last-ditch attempt to fend off horror vacui.
The interesting thing is that we might find ourselves successful.
Posted by fabio sergio | 6:18 PM | permalink
December 11, 2003
I've come to realize that my neurons are few, lazy and inept at processing data in real time, which makes the info-foraging-digesting process a rather lengthy affair.
All of this as an excuse for a horribly-late coverage of IDII's Foundations of Interaction Design Symposium (extensive notes to be found on IDII's Hub, starting here).
All of this also to say that what follows should not be considered a detailed account of what was discussed during the two signal-heavy days, just what yours truly's small brain finally managed to somewhat distill out of the whispering data cloud.
I'd sum the 8 sessions up with two visibly absent concepts/keywords: "boundary object" and "relational".
The basic idea behind the symposium was apparently simple.
Interaction Design lacks any agreed-upon, "formal" theoretical framework, as it constantly borrows and tweaks tools and techniques taken from many (neighboring) disciplines, so why not bring together recognized thought leaders from the various fields, both academics and "professional practitioners", and have them talk about their specific domain's state of the art, to share ideas and directions?
The end result was an effective, if a bit babelic mingling of terminologies and methodologies, which did not necessarily lead to an understanding of similarities, but, even more interestingly, to an understanding of common differences.
Bear with me as I digress for a minute.
Science still mainly follows a process of specialization, recursively applying to itself the basic rules that shape its very practice.
Just as each scientific domain has dissected its area of interest into smaller and smaller bricks/bits (think atomic particles and human genome project), so has each scientific discipline narrowed its area of interest more and more.
While places like the Santa Fe Institute have been long trying to look for unifying theories, the bulk of the scientific community still thrives on knowledge atomization, pushing micro-level understanding of phenomena.
Incidentally this model has slowly permeated every knowledge domain, and is not foreign to the market either, where every niche opened gets quickly filled with look-alike products and services geared to exploit newfound opportunities.
In my humble opinion this over-specialization, especially when it comes to academia, was evident in many speeches and even while chatting during coffee breaks between sessions.
Most speakers came with razor-sharp understanding of the ins and outs of their disciplines, and were often left puzzled by the "incorrect" usage/approach taken by others when it came to discuss what they considered to be "theirs".
And this is where boundary objects come into play.
During the symposium terms like "activity" or "memory" were often used by various speakers and commentators, with widely different meanings and nuances.
The resulting "confusion" was possibly a cause of concern to some of the participants, but definitely not to interaction design practitioners.
In other terms, the "purity" and unambiguity so common in scientific/academic circles have no place in a discipline as complex and multi-versed as interaction design, where these boundary objects abound.
After the symposium I am even more convinced that we still actually need some level of confusion around terms and practices to shape new, fluid vocabularies of practice.
And here's where the second keyword, "relational", comes into play.
Many presenters addressed their proposed theme by looking (again) at the bits, but few talked about how their bit interrelates with the other ones.
The (IMHO delusional) talks on "emotion", for example, looked at it as if decoupled from all the other experience components, hoping to somehow measure it alone.
If I've learned one thing during the last few years it is that any monolithic approach to knowledge is dead and gone, and unsurprisingly this is all the more evident in a discipline basically born out of environments based on open-ness and decentralization.
The term "co-configuration", introduced by Yrjö Engeström during the second session, should thus apply not only to the relationship between designers-at-large and the users of their products-at-large, but to the very relationship between interaction design and the disciplines it draws its lifeblood from:
"Co-configuration requires flexible knotworking, no single actor has the sole, fixed authority.
The center does not hold."
In other words interaction design might be facing the difficult task of breaking new ground when it comes to formalized knowledge practices, and we could be looking at a hypernodal model, where value is to be found in the connectors, not in the nodes.
I'd be tempted to say that no project epitomizes all of the above better than Victor Vina's Box research project.
Atomized features connected to create a whole that's bigger than the sum of the parts.
Built-in open-ness, decentralization, flexibility, adaptivity.
Design disappearing in the definition of potential relationships among the boxes, made visible once again by the interaction between the system and its actors/users.
Stable structural principles, compositional freedom.
Interaction Design as a practice defined by the (information) architecture of its afferent disciplines?
One last half-formed thought, inspired by Walter Gerbino's wonderfully creative talk on "Presence and Visibility", which introduced the concept of "a-modal completion":
"... our 3D world is densely populated by invisibly-present entities.
A-modally completed surfaces and volumes are the rule: generally, they depend on filling-in processes supported by properties of visibly-present parts."
In other words, the ability of our brain to fill in missing information when it comes to visually interpreting complex, 3-dimensional, interpenetrating objects.
I couldn't help but combine the concept with one of my take-away key-terms and (re)assemble an "a-modal relational completion process": how to identify, and thus be able to recreate, the way in which we mentally model the invisible relationships between the various atomic components of an (interactive) experience.
How do "the people formerly known as users" put together the parts that "can't be seen"?
More food for (future) thought.
Posted by fabio sergio | 4:47 PM | permalink
December 05, 2003
Medialab Europe's tunA (see also Wired's take on it):
"tunA is a mobile wireless application that allows users to share their music locally through handheld devices.
Users can "tune in" to other nearby tunA music players and listen to what someone else is listening to.
Developed on iPaqs and connected via 802.11b in ad-hoc mode, the application displays a list of people using tunA that are in range, gives access to their profile and playlist information, and enables synchronized peer-to-peer audio streaming.
Can the walkman become a social experience?
Can anyone become a mobile radio station?"
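The architecture is easy to imagine in miniature. Here's a hypothetical sketch (my own, not Medialab Europe's code; the class names, peers and tracks are all invented), with an in-memory registry standing in for 802.11b ad-hoc discovery:

```python
class Peer:
    """A tunA-style device advertising a name and a playlist."""
    def __init__(self, name, playlist):
        self.name = name
        self.playlist = list(playlist)
        self.now_playing = self.playlist[0] if self.playlist else None

class AdHocNetwork:
    """Stand-in for 802.11b ad-hoc range: whoever joined is 'in range'."""
    def __init__(self):
        self.peers = {}

    def join(self, peer):
        self.peers[peer.name] = peer

    def in_range(self):
        return sorted(self.peers)           # names of discoverable peers

    def tune_in(self, target_name):
        """Hear whatever the target peer is currently playing."""
        return self.peers[target_name].now_playing

net = AdHocNetwork()
net.join(Peer("aki", ["Cornelius - Drop", "Pizzicato Five - Baby Love Child"]))
net.join(Peer("mina", ["Mina - Se telefonando"]))
```

Calling `net.tune_in("mina")` plays along with that peer's current track; the real system adds the hard part, synchronized peer-to-peer audio streaming over the wireless link.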
I know a few friends who might be interested...
Posted by fabio sergio | 3:54 PM | permalink
December 02, 2003
Sweaty cheap philosophy time at freegorifero.
Valeria and I have been stretching in a friend's gym on Thursday nights, which works wonders for our desk-bound joints.
While stretching, the body reaches a point where it readies itself for pain.
Muscles tighten, in a desperate instinctual reaction, as if to fend off self-inflicted aggression.
And here's the catch.
If you consciously force yourself to relax at that very point, pain comes later.
Not that it doesn't come, mind you, but you discover you can simply stretch further.
Accepting pain. Letting go. Stretching.
I'll leave it up to you to draw parallels with other situations where this might apply.
Just another one of my (in)famously useless moments of enlightenment.
Posted by fabio sergio | 11:19 AM | permalink
December 01, 2003
First grid::blogging instantiation.
Other voices here.
Ever since Ashley posted his manifesto/invitation a question has taken shape at the back of my mind.
Has grid::blogging quickly become a brand?
The answer, I tend to believe, is yes.
There's clearly a mission statement, with implicitly and explicitly expressed values, there's a name and emerging signs of a symbol/logotype, albeit in an early stage of evolution.
The grid::blogging brand has already got equity, and even an image, I'd argue.
In other words grid::blogging exhibits quite a few of the basic ingredients that characterize a brand and its identity.
And thus comes the simple question: why is it so?
Why wasn't it enough to simply say "hey, let's all agree that on a specific day we'll all write about a common theme"...which is in the end at the kernel of what we are doing today?
Why did Ashley feel the need to christen the initiative in such an ear-catching manner, why the manifesto, why our enthusiastic reaction?
My feeling is that a lot has to do with branding's increasing pervasiveness in our info-cluttered world, down to unprecedentedly granular levels.
The term "brand" originally only referred to a way to distinguish things, and had nothing to do with all the much-debated "owning the mind, heart, loyalty and lifestyle of the consumer" hoopla that has given branding-at-large its current negative vibe.
I'd be tempted to say that thanks to a medium built on open-ness and decentralization today the original meaning of the term brand has re-surfaced at the micro-level, (optimistically) devoid of many market-driven implications characteristic of the macro-level.
All things considered blogging is in itself a collectively individual branding exercise: everyone pushing his/her own brand called you, whether consciously or not.
In this sense grid::blogging could be seen as a (brand) new type of brand, an ad-hoc array of freely aggregating micro-brands, gathered not around a product or a service or a desire, but around a project.
A hybrid, stretched between an aging top-down world and new emergent forms of interaction shaping new value chains.
Posted by fabio sergio | 7:32 PM | permalink
Not bored yet?
Visit freegorifero's weblog archives for more useless words.