What is a Perceptual Browser?

My work is about storytelling by time and place – not so much the stories themselves, but more importantly to me, how data tell stories. I think of data as the most basic output of recording devices – from which meanings are derived. I think of stories very broadly – as matrices of data connections between people, times, places, things, events, etc. They may be fact or fiction, prose or poetry, implicit or explicit; they are how we know and understand.

I’m developing a perceptual browser, based on a metadata time-and-place taxonomy, for streaming data into information, knowledge, and understanding. I’m calling the browser TimeAndPlaceStoriesTM, and I consider it an understanding engine.

Pathways to knowledge and understanding begin with perceived data. Most of our knowledge, though, is not the result of directly-perceived data. Most data from primary sources are mediated before we receive them. That is, they pass through communication media, and are often edited, analyzed, evaluated, filtered, interpreted, assimilated, processed, reprocessed, replicated, remixed, synthesized, compiled, or otherwise transformed into what we call information – for us to apply as knowledge, in order to achieve and advance understanding.

It seems to me that the shortest path from perceived data to knowledge will afford the greatest opportunities for optimal understanding.

Vast amounts of directly-perceived data are available in many forms for us to browse today. Unfortunately we, individually, do not currently have the tools to process vast amounts of data into information, to apply that information in generating knowledge for ourselves, and to create and advance our own understandings. Here is where my perceptual browser comes in.

A perceptual browser is about………

Perceiving

When we experience something firsthand, we perceive it directly through all of our available senses. Being multisensory, direct experiences are rich and informative. That’s why people attend concerts, climb mountains, chase storms, go to the ends of the earth to experience things firsthand. That’s why people endure struggles and achieve exhilaration. Direct firsthand experiences yield firsthand information, knowledge, and understanding – and vivid memories.

No one person, of course, can experience everything firsthand. Most of what we know and come to understand is the result of information we receive secondhand, thirdhand, and beyond, through myriad channels of communication media. Sometimes decades or generations after the fact.

Some events, times, and places, of course, can never be experienced today. Events that happened in the past (think of WW2) can be perceived only through the stories that the remaining original source material reveals, or through stories told by those who were there when they happened. And places change over time (think of the Acropolis) and can be perceived as they were only through the stories that original source material from various times reveals, or through stories told by those who saw them at various earlier times.

In order to help ourselves remember our own direct experiences (recent and distant past), and in order to share experiences with others (at the time or later), we document them in various ways. These data documents tell stories. We’ve been documenting experiences and telling stories since before the beginning of recorded history. Over the millennia, important innovations and technologies (think of the alphabet, movable type, and the computer) have advanced the telling of stories. New devices and services for preserving and sharing experiences and memories are proliferating exponentially these days, resulting in greater-than-ever numbers of available records of direct experiences. More data, more recordings of data, more stories. More compilations and configurations of records, more stories. Stories replicate, aggregate, and multiply. Most stories are based on a limited number of direct experiences, original perceptions, or original source material. Yet, this is how we know and come to understand most things – through retellings and interpretations.

Even with today’s most advanced technologies, though, multisensory direct experiences must be “flattened” (for lack of a better word) into records that typically are perceived by only one or two senses (usually sight and sound) for storage, transmission, replication, replay, etc. (Think photos, words, phone conversations, emails, texts, etc.) Yes, entertainment media are always extending the bounds, creating “richer experiences” – but still, those “stories” are specific time-sequenced visual and/or audio tellings and retellings of experiences. Thus, our perceptions about people, places, things, and events that we do not or cannot directly experience come mostly through aggregated or composed stories about them (that is, they are not the rich firsthand multisensory directly-perceived experiences themselves). And each individual story is, for the most part, from one point of view or a limited number of points of view, at a specific time or during a specific time span.

We receive a lot of information (actually too much, as in information overload) these days from stories – but how much more could we know and better understand about what is actually happening in the world (or at least in the lives of those we care about) if we could shorten the path from data to knowledge? What if we could bypass redundant, extraneous, irrelevant, and distracting information that we’re not interested in, and could directly access more data from more points of view ourselves about what we’re interested in? What if we had the perception tools to scan vast numbers of unmediated direct experiences that we could filter, analyze, and edit for ourselves, instead of relying on proprietary access to data and retellings of information?

Let’s consider, for example, an event that was perceived directly by many people from many points of view, which generated a great deal of information and many retellings of information – the so-called “miracle on the Hudson”. On the afternoon of January 15th 2009, at approximately 3:29, US Airways Flight 1549 glided into the Hudson River in New York. At 3:30 people began to Twitter about it. At 3:31 major news media broke into regular programming to begin reporting. At 3:35 AP began covering it. At 3:38 the first blogs appeared. At 3:40 the US Airways press office learned about it from a news network. Within the first three hours after 3:29 that afternoon there were several thousand Tweets, more than 1,500 videos on YouTube, more than 270,000 Google-indexed pages, and countless blog entries. Major news media were soliciting photos and videos of the incident for their stories. In the meantime, the airline and the government were trying to catch up with the stories already out there, so that they could begin telling their own stories. And, as they say, the rest is history.

This well-documented large-scale example illustrates how directly-perceived data are aggregated into information, and how story replication processes begin. Like this example, almost any richly-informative multisensory directly-perceived event – on almost any scale these days, from a child’s first birthday to a rock concert – will generate some number of data recordings. These flattened recordings of firsthand experiences seem to get even flatter with “push” retellings over time (whether on a social network or a news network). On the other hand, direct “pull” access to original data recordings by time and place would yield richer perceptions and more points of view of any event or time or place. Imagine having the tools to browse (then and now) all original perceptions (data recordings) between 3:30 and 6:30 on the afternoon of January 15th 2009 along the Hudson River…………
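
To make that “pull” idea concrete, here is a minimal sketch, in Python, of what browsing original recordings by time and place might look like. Everything in it is hypothetical: the Recording class, the field names, and the rough Hudson River coordinates. The point is only that a time window plus a place window is enough to select firsthand perceptions directly, without any intervening retellings.

from dataclasses import dataclass
from datetime import datetime

@dataclass
class Recording:
    """One hypothetical record of a directly-perceived experience."""
    captured_at: datetime   # device time stamp
    latitude: float         # device place stamp
    longitude: float
    uri: str                # where the raw recording lives

def browse_by_time_and_place(recordings, start, end, lat_range, lon_range):
    """Return every recording captured within a time window and a bounding box."""
    return [
        r for r in recordings
        if start <= r.captured_at <= end
        and lat_range[0] <= r.latitude <= lat_range[1]
        and lon_range[0] <= r.longitude <= lon_range[1]
    ]

# Hypothetical query: the afternoon of January 15th 2009, along the Hudson River.
matches = browse_by_time_and_place(
    recordings=[],                        # imagine millions of shared recordings here
    start=datetime(2009, 1, 15, 15, 30),
    end=datetime(2009, 1, 15, 18, 30),
    lat_range=(40.70, 40.95),             # rough, illustrative coordinates only
    lon_range=(-74.05, -73.95),
)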

Browsing

We begin looking for information (about the past, the present, or the future) by browsing – leisurely, casually, purposefully, randomly, skimming, looking, listening – in stores, museums, libraries, magazines, conversations, online, with a TV remote, a keypad, a touchpad, a tablet, a phone, with a question or curiosity. We browse for work, play, recreation, entertainment, amusement, etc. Browsing might lead to gathering information, gleaning, extracting, broadening or narrowing a search. Browsing might be an end in itself, or a point of departure, a beginning of a journey or a story.

Rapidly advancing technologies, changing cultures and lifestyles, the flattening of the world, difficult economic times, and a host of other factors are changing the ways people browse. Public services and institutions are cutting back. Public libraries are being outsourced. Books are read on screens. Phones are being used more for texting than talking. Social networks are becoming communication media. Traditional media, facing declining audiences, are looking for new advertising models and struggling to find ways to integrate new media. Same goes for traditional retailing. More and more media are competing for less and less of everyone’s attention. There is a lot of noise out there. Yet, whether looking for products and services or information and knowledge, browsing from anywhere on the planet is getting way better.

Browsing via a direct experience in a real place, like a store, museum, forest, or attic, is usually multisensory. Browsing remotely via a book, magazine, phone, computer, or any other device, accessing a virtual place, is usually by way of only one or two senses.

Profound advances in information gathering and searching will occur as browsing in real and virtual places continues to evolve and merge through GPS, mobile phones, mobile tagging, and numerous location-aware devices and technologies. Barcodes, ShotCodes, etc. in real places can lead information seekers to online information. Satellite GPS navigation systems can lead location seekers to real places. Online location-aware mobile devices can connect people, places, and events. Browsing in real and virtual places will become increasingly seamless as communication, information, and knowledge systems converge. Similarly, sensory perceptions will merge as direct and remote browsing converge.

Most browsing for information is, of course, remote, because it’s efficient and people can’t look in all real places for information. Most remote browsing for information these days is, of course, online, because it is the most convenient and accessible. Most online browsing for information, images, music, videos, maps, news, shopping, etc. begins with words. (While it’s possible to search for a song by humming a tune, and it’s possible to find images based on form characteristics, these technologies presently have limited applications. And with gesture technology making its way to market, maybe someday we’ll be able to do pantomime searches.) For the foreseeable future, though, we do keyword searches. But finding information online with words is not always easy.

Words can have various meanings depending upon context. There are many people, places, and things with the same names. Finding multiple points of view or multiple sources to confirm information is not always easy because of the infinite number of possible combinations of words that are used to index information. Resolving differences among various information sources or interpretations of data can be tedious. Word browsing results depend upon search algorithms, search engine categorization, page rankings, patience, and persistence. And word browsing takes a lot of keystrokes and clicks.

The relevance of word search results often depends upon when specific information was created and where it came from. Or, perhaps more importantly, upon when and where the data that the information is based on came from.

Again, imagine a shorter pathway to information about the events of January 15th 2009 along the Hudson River – that is, having direct access to original data recordings from that afternoon, browsing for data by time and place, as well as browsing for information with words……….

Browsing and Perceiving

Whether real or virtual, firsthand or secondhand, browsing and perceiving are currently limited, one way or another, by time, place, and the precision or imprecision of words. Even one’s own immersive and richly informative direct experiences are series of perceptions from one point of view during specific time spans. How much richer might (real and virtual) browsing and (multisensory) perceiving be if – in addition to one’s own direct experiences – one had direct access to directly-perceived data, from all recording devices, in all media, that address all of the senses, from many points of view, of any event or place over any time span?

And how might one browse all those perceptions?

A Perceptual Browser

TimeAndPlaceStoriesTM is a perceptual browser. It is a Web browser of sorts. I refer to it as a perceptual browser because it affords unfiltered and unmediated direct access to sensory perceptions of any and all direct experiences that are recorded (in all media) and shared – in other words, the raw data, not replays, remixes, interpretations, edits, compilations, or reprocessed information.

Thus, a perceptual browser affords access to all points of view of all people, places, things, and events that ever happened and are happening. All points of view are arrayed by time, place, and subject metadata tags. (The rationale for and value of the metadata time-and-place architecture are described in the storytelling, architecture, and development links to the left.)

Data from every recorded observation or experience could be thought of as “story elements”. Story elements, located by time, place, and subject tags, and recorded from many different points of view, may be composed into stories by story readers. In a return to collaborative storytelling, “our stories” would yield richer meanings and greater understandings than “his-story”.
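
For what it’s worth, a story element might be modeled along the lines of the Python sketch below. The StoryElement class and the compose_story function are hypothetical names of my own, not part of any existing system; they simply show how time, place, and subject tags, plus a point of view, would let a reader compose a story rather than receive one.

from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class StoryElement:
    """One recorded observation: the hypothetical 'pixel' of a story."""
    captured_at: datetime                          # time tag
    latitude: float                                # place tags
    longitude: float
    subjects: set = field(default_factory=set)     # subject tags
    point_of_view: str = ""                        # who or what recorded it
    uri: str = ""                                  # where the raw data live

def compose_story(elements, subject):
    """A reader-composed story: every element about one subject, in time order."""
    return sorted(
        (e for e in elements if subject in e.subjects),
        key=lambda e: e.captured_at,
    )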

(I suppose story element would become “stoel” or “stel” as a contraction. Not quite the same ring as pixel, but then, perhaps pixel sounded odd back in 1965.)

Someday all cameras and other recording devices will have radio clocks that will always be set to the correct time (they will become as smart as phones). Someday all cameras and other recording devices will be location-aware (like phones). Someday all cameras and other recording devices will be able to subject-tag recordings (as some software is now becoming able to do). When those technologies become fully integrated into all recording devices and all recordings are automatically metadata time-stamped, place-stamped, subject-tagged, and uploaded, new metaphysical relationships will emerge. (Then, perhaps someday a 3D perceptual browser will read cloud stories in a meta-connected world. More about that in future postings.)

In the meantime, we can begin to move in that direction, somewhat two-dimensionally, with extant technologies, devices, and the Web as we know it today. TimeAndPlaceStoriesTM is a perceptual browser that reads metadata. More specifically, it reads time, place, and subject metadata.

All files (documents, photos, messages, etc.) include metadata – that is, data about the file data. Ordinarily we don’t see much metadata in most application programs. Operating system file browsers show some metadata. The mostly-hidden metadata that accompany all files can yield a great deal of information. As technologies advance, as recording devices become smarter, and as metadata standards and format specifications evolve, metadata will become increasingly rich and useful sources of information about files.

A powerful feature of metadata encoding is automation. Any digital recorder that is part of any device (a microscope, computer, camera, telescope, etc.) embeds metadata into files automatically. Automated and accurate time, place, and subject tagging that is controlled by devices, and does not depend on human settings and controls, is less likely to be affected by human omission or error. (Remember when you couldn’t set the time on a VCR? Remember VCRs?)
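
As a small illustration of how much a device already embeds on its own, the Python sketch below reads the capture time and GPS information that a camera writes into a photo’s EXIF metadata. It assumes a reasonably recent version of the Pillow imaging library and a photo whose camera actually recorded these tags; tag names and availability vary by device.

from PIL import Image, ExifTags   # Pillow, assumed installed

def read_time_and_place(path):
    """Return the embedded capture time and raw GPS data from a photo, if present."""
    exif = Image.open(path).getexif()
    # Map numeric EXIF tag ids to readable names, e.g. 306 becomes "DateTime".
    named = {ExifTags.TAGS.get(tag, tag): value for tag, value in exif.items()}
    captured_at = named.get("DateTime")     # the device's automatic time stamp
    gps = dict(exif.get_ifd(0x8825))        # 0x8825 is the GPSInfo sub-directory
    return captured_at, gps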

Importantly, some software programs also allow adding metadata manually. So, with adequate validation criteria and verification controls, digital recordings of any artifact, or legacy analog recordings of anything, may also be identified by time, place, and subject tags in metadata – broadening areas of historical inquiry in fields such as the digital humanities. Tagging like this is being done today in various proprietary relational databases (private, subscription, and public) with controlled protocols and vocabularies, and specified fields and search terms – again, words. This requires a hunt-and-peck method of finding anything in any number of databases, but it’s a start.
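
A sketch of the manual case might look like the following. The sidecar-file format here is entirely hypothetical (a real project would more likely use an established standard such as XMP or Dublin Core fields), but it shows how a legacy recording could acquire time, place, and subject tags in a form a metadata browser could read, along with a simple verification trail.

import json
from pathlib import Path

def write_sidecar(recording_path, captured_at, latitude, longitude, subjects, verified_by):
    """Write a hypothetical '<file>.meta.json' sidecar next to a scanned legacy recording."""
    sidecar = Path(str(recording_path) + ".meta.json")
    sidecar.write_text(json.dumps({
        "captured_at": captured_at,      # e.g. "1890-06-01T10:00:00"
        "latitude": latitude,
        "longitude": longitude,
        "subjects": sorted(subjects),
        "verified_by": verified_by,      # manual tags need a validation trail
    }, indent=2))

# Hypothetical usage, tagging a scan of an 1890 photograph of the Acropolis:
# write_sidecar("acropolis_1890_scan.tif", "1890-06-01T10:00:00",
#               37.9715, 23.7267, {"Acropolis", "Athens"}, "archive curator")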

With automated and accurate time-stamps, place-stamps, and subject tags in metadata, a browser that reads metadata could find any shared data file from any time, from anywhere, about anything by anyone in a single universe of files.

But wait, there’s more!

Up to this point, I’ve been generally describing how we perceive and browse the world outside ourselves. We also perceive and browse our own worlds – the documents we create, the dialogs we have with others, the data and information we find in the world outside ourselves, that we accumulate, aggregate, bookmark, save, etc. for future processing. We have mental schemas for organizing data internally, and file structures for organizing data in the myriad devices we use every day. Our perceptions and the ways we browse our outside, inside, real, and virtual worlds are becoming increasingly seamless as we upload, download, synchronize, etc. As we process data into information in order to communicate and know and understand and make decisions (and yes, to be entertained), we interact physically, intellectually, and emotionally with many different devices all day every day. And without being especially aware of it (except when we’re frustrated by it), we are also interacting every day with many different operating systems, proprietary software, ways of perceiving, and browsing tools (in phones, computers, cameras, etc.) that have varying degrees of compatibility and synchronicity.

We should be able to perceive and browse our accumulated and stored data and information – our stuff – with the same mental schema and file structures that we use for interacting with the outside world. It seems to me that a useful way to organize our virtual lives would be the way we organize our real lives – by time, place, and subject. A simple concept, but a daunting challenge today considering the myriad devices and operating systems that we use to interact with the world, and considering the myriad proprietary virtual and real places we visit.

My TimeAndPlaceStoriesTM perceptual browser will organize and synchronize our created, accumulated, and stored data and information with the same time-and-place schema and file structures that we use for interacting with the outside world. Not only the same, but seamlessly integrated with our real-life schemas. As a perceptual browser, TimeAndPlaceStoriesTM tells stories by panning, zooming, and filtering in the same ways that we actually think about things.
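
As a rough illustration only: in code, “panning” and “zooming” over time and place might reduce to shifting or scaling a query window, something like the Python sketch below (the View class and its fields are hypothetical, chosen just to make the idea tangible).

from dataclasses import dataclass, replace
from datetime import datetime, timedelta

@dataclass(frozen=True)
class View:
    """A hypothetical time-and-place query window."""
    center_time: datetime
    time_span: timedelta
    center_lat: float
    center_lon: float
    radius_deg: float            # crude angular radius, for illustration only

def pan(view, dt=timedelta(0), dlat=0.0, dlon=0.0):
    """Shift the view in time and place without changing its scale."""
    return replace(view,
                   center_time=view.center_time + dt,
                   center_lat=view.center_lat + dlat,
                   center_lon=view.center_lon + dlon)

def zoom(view, factor):
    """factor < 1 narrows the view (zoom in); factor > 1 widens it (zoom out)."""
    return replace(view,
                   time_span=view.time_span * factor,
                   radius_deg=view.radius_deg * factor)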

In the context of my perceptual browser development, I’ll be writing a few short essays in this area of my site about how perception relates to storytelling points of view, change, deception, time-based media, and perhaps other topics. In ways that we do not yet know, the ideas of Plato, Einstein, and others about perception and reality will be advanced and shaped in our emerging digital world. I envision that the direction of my storytelling work – that is, how data tell stories – will in some ways contribute to this exciting time-and-place journey…………