Archive for the ‘2.0’ Category


Tech Trends

November 28, 2007

I’ve been doing a guest lecture spot for Consumer Studies students this week, and this year’s talk is on “tech trends”.  In order to make it more than a cool-gadgets show-and-tell, I’m dealing with tech trends in the context of the type of people who can spot trends and then use them to forecast … and why these are the type of people that I, and many others, look for when hiring.  Someone out there referred to these people as possessing “entrepreneurial character”, and I have adopted that as my own term now.  In brief, these people are passionate, smart, self-motivated learners who re-invent their jobs, thrive in chaotic environments, stay curious, and are able to spot paradigm shifts.  I try to be one of these people, but invariably I can’t keep up a few of those characteristics to the level I might like.

Anyway, the good part is that I also came up with an example to demonstrate the entrepreneurial character in action.  Google pretty much always has many projects on the go and to most people it appears as though they come up with a series of random cool things to build, and then throw them on their page of offerings. To a person who can spot trends and then put them together to form a picture of the future, these projects are not random at all, but pieces of a puzzle.  Currently Google has the following on the go, mystifying many:

1) Their pay-per-click program:  If someone clicks on a Google ad, the business they’re sent to pays Google a small fee.  Billions of ads a day, with 1 in 1000 people clicking, equals a ton of revenue.

2) Many of these businesses are really starting to feel that they’d like to be sure that these visits are actually resulting in sales … return on investment.

3) G-Pay: … a project that would let you pay for products by using your phone to text-message the funds to someone.

4) Google Local: A system that uses your phone to check for the best possible price for a product in your local area.

5) Google phone:   There was much speculation that Google was going to put out a phone.  This would be odd for them, since they never actually manufacture anything, and they don’t tend to enter markets that are already established and have strong competition (unless you count search engines).

6) Wireless spectrum auction: In February a chunk of the wireless spectrum is going up for auction.  There is much speculation that Google is going to spend billions to own a frequency. Why?

7) Google Talk:  Google has a voice-over-IP system … I don’t know anybody who uses it.
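
The arithmetic behind item 1 is worth making concrete. Here’s a back-of-envelope sketch; every number below is my own assumption for illustration, not an actual Google figure:

```python
# Back-of-envelope sketch of the pay-per-click arithmetic in item 1.
# All numbers are illustrative assumptions, not Google's real figures.

ads_served_per_day = 2_000_000_000   # "billions of ads a day"
click_rate = 1 / 1000                # "1 in 1000 people clicking"
fee_per_click = 0.50                 # assumed average fee, in dollars

daily_revenue = ads_served_per_day * click_rate * fee_per_click
print(f"${daily_revenue:,.0f} per day")   # $1,000,000 per day
```

Even with these modest made-up numbers, the small per-click fee scales into serious money.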

So, see if you can put this together.  What is Google’s plan here? … They know how to make money, and they generally break the paradigm wide open when they do something big. Are these a collection of separate inventions, or is there a grand scheme at work here that will all come together down the road?

Here’s the scenario … largely provided by an entrepreneurial character in India who I stumbled across.

Anyway, it goes like this.  You’re out shopping in the future. In your hand is a wireless device (of any type) with the Google operating system downloaded onto it.  This device (it could be a smart phone, a Zune, an iPod touch) runs on the Google chunk of the wireless spectrum for free and uses Google Talk to send messages (or just text-messaging).  You’re shopping for an HDTV at BestBuy.  You use the device to fire up Google Local and look for the best price in your region, and it turns out that you go to Costco because Google points you there. When you get there, you use G-Pay to text-message the funds from your wireless device to the device of the guy standing behind the counter.  Since Costco knows that you were pointed there by Google, they know they are getting a return on their investment: Google pay-per-click is working for them, and they give a small portion of the sale to Google.
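
The whole scenario fits in a few lines of Python. To be clear, every function name, price, and fee rate below is invented for illustration; none of this is a real Google API:

```python
# Toy sketch of the hypothetical shopping flow described above.
# All names, prices, and the referral rate are invented assumptions.

def google_local_best_price(product, offers):
    """Pick the store offering the lowest price (the Google Local step)."""
    return min(offers, key=lambda store_price: store_price[1])

def g_pay(buyer_balance, price):
    """Transfer funds by text message (the G-Pay step)."""
    assert buyer_balance >= price, "insufficient funds"
    return buyer_balance - price

def referral_fee(price, rate=0.02):
    """The store pays Google a cut of the referred sale (the pay-per-click step)."""
    return price * rate

offers = [("BestBuy", 899.00), ("Costco", 849.00)]
store, price = google_local_best_price("HDTV", offers)
balance = g_pay(1000.00, price)
print(store, price, round(referral_fee(price), 2))   # Costco 849.0 16.98
```

The point is the last line: Google collects its cut on a transaction that happened entirely in the physical world.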

Google has made the transition from making money on web advertising to making a chunk of money on every retail transaction they can get their hands on … they’ve jumped from the web to the “real world” in terms of pay-per-click, and they get a piece of just about everything.

Will this happen?  Who knows, but it sure does sound like a good idea, and all of the pieces are falling into place.

That’s entrepreneurial character (and why Google recruits so many PhDs), and figuring that out is another example of the same thing.


I suspect that people will never “get it” …

September 19, 2007

So, another story has popped up that demonstrates that people do not understand the Internet, even though they live on it. In a somewhat bizarre but not terribly surprising way, it seems that web advertisers are poised to sue those who create ad-blocking software. For those who don’t know, it is possible to install plug-ins for your favourite web-browser that will block ads. This is a good thing … you may simply not like ads and wish to minimize their appearance in your life, but you may also be aware that half the time when a page loads slowly it’s because your browser is trying desperately to download an ad from some far-off third-party server. So, people have made plug-ins to help you, and many just no longer see ads. This has caused the makers of ads to decide that they should sue the makers of ad-blockers for damaging their ability to get revenue from the web … to me, this is the very definition of bizarre.

Here’s why: this type of thing seems to go in never-ending cycles. People who find a way to make money seem to feel that they have a right to transfer that money-making method to the web. The music industry is a ridiculously obvious example of this. Because they control the means of distribution in the “real world”, they believe that they should also control the means of distribution on the web. They find out, of course, that the web just doesn’t sit still and listen to them ranting and raving. They find out that lawsuits and cease-and-desist orders work for about five minutes. The motion-picture people continually send cease-and-desist orders to The Pirate Bay BitTorrent site, demanding that it no longer provide people with the means to download films … the music people send orders telling sites to stop providing access to recordings of live music … lawyers get paid a lot of money to try to stop the giving away of music. In all of these cases, the site in question mockingly posts the order, calls the lawyers a bunch of names, and reappears completely intact (if it goes away at all) in another location. Etc., etc.

What is interesting to me about this is the fact that the people chasing these sites seem to have no understanding whatsoever of the nature of the Internet. The Internet was not an orderly thing that got into the hands of bad people and was turned into a haven for criminals. The Internet was designed to be this thing. It was intentionally made to be a decentralized system without authorities, one that could not be choked by some central traffic regulator that spotted something bad and wanted to stamp it out. Tim Berners-Lee, the closest thing to an “inventor of the web”, made this very clear in his initial documentation regarding the nature of the web, created at CERN back in the early ’90s. The web was to be a decentralized system, with no one node ever becoming capable of being a choke-point. It was a model that relied on the reason of the players: the web should be “tolerant”, allowing people to experiment while still demanding that they follow certain protocols, and it should be decentralized, so that one bad part can’t bring the whole thing down. It must also be modular, so that one part can be changed without changing all of the parts. It is amazing to me how successful the designers were in creating this system, and how it continues to function to this day. It also amazes me how people like advertisers and those who sell things just can’t grasp this. Just because your advertising system works today by no means guarantees that it won’t be entirely irrelevant tomorrow. Just because you’ve invested a lot in a system doesn’t mean that it won’t be bypassed for something cooler tomorrow … too bad, you can’t purchase the right to control the means of distribution in this system.
In a way it is a truly fair playground, where ideas win because they’re good … in another way, it is so different from the physical world that it causes real problems for people that can’t move quickly and want to control the system as they have so effectively done in the past. The most interesting thing is that there is no point in judging all of this … it is what it is, and success comes from seeing the situation clearly and dealing with it, not demanding that things change to suit your conception of how they should be.


A few things brewing …

April 25, 2007

Went to a library event called “Digital Odyssey” last week to hear a number of people speak. I was particularly interested in hearing what the relatively new Chief from McMaster had to say, and that was quite interesting. (On a completely different topic … to write this I just switched over to my PowerBook from a PC … the keyboard on this thing is beautiful after the PC … it just feels so perfect!) Anyway, I went in thinking about a recent webby application I’d become interested in, and a number of topics merged. There was one talk on putting web 2.0 features into a library catalogue, and something about it was really rubbing me the wrong way … it was good work, but there was just something off. It struck me that this project, by implementing rating systems, comments, and such, was focusing too much on the discrete end points of searching. I.e., they were collecting searches, and they were collecting the items (and reactions to them) at the other end. This is all fine and good, but I’ve been focusing a lot lately on the idea that research is not just entering a search and receiving results. It’s also more than just matching like people with like results and preferences. Research, in fact, is a process, and it’s a complex and often social process. The bit that’s missing in this focus on search beginnings and endings is the “trail” that makes up the middle, and the trail is extremely important to the person seeking information.
I’ve thought about this quite a bit and discussed it with scholars. The most obvious manifestation of the trail is following citations. Once a person reaches a certain point in their education, they realize that the endnotes and bibliography of any article can be almost as important as the content. These elements of a paper show you who is an influence on the author, and how the author has constructed his or her argument. Many times I have followed citations and created a “trail”, and I know that my research is nearly complete when I begin to see the same sources being mentioned repeatedly … at that point I begin to feel like I have a grip on the community of people working on this topic and that I have “covered the ground”. The trail can also certainly be a social circle of actual people, and I suspect that people utilize these trails without even realizing that they are utilizing a technique (you know that the guy down the street has had some great work done on his house, so you ask him about the process of getting renovations done, he tells you to contact so and so, etc).
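
That “covered the ground” moment, the point where the same sources start reappearing along a trail, is easy to sketch. Here’s a toy illustration in Python, with invented paper names:

```python
from collections import Counter

# Toy sketch: follow a citation trail and watch for the sources that keep
# reappearing across bibliographies. All paper names are invented.
trail = [
    ["Smith 1998", "Jones 2001"],              # bibliography of paper 1
    ["Smith 1998", "Lee 2003"],                # bibliography of paper 2
    ["Smith 1998", "Jones 2001", "Wu 2005"],   # bibliography of paper 3
]

counts = Counter(source for bibliography in trail for source in bibliography)
core = [source for source, n in counts.items() if n >= 2]
print(core)   # the repeatedly-cited "community" of sources
```

When `core` stabilizes, you have a rough picture of the community working on the topic, which is exactly the feeling of having covered the ground.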

Anyway … trails. As I mentioned some posts ago, this fellow Vannevar Bush understood the importance of trails way back in 1945 when he speculated about information technology. I think that we have gone through a period in digital information use that has lost the utility of the trail to a certain degree, as we each do discrete searches, find some stuff, and then exit Google or whatever, our search history more or less lost forever. The current library catalogue most definitely has no memory, and does nothing to tie together all of the bazillion people who use it … and there’s a lot of information being lost there. Remembering their searches, their results, and their ratings would be something, but remembering their “trails”, and the networks that they create as they move through information, would really be something.

There is actually at least one product attempting to put the idea of “trails” into use. The folks at Trailfire get the concept and are trying to create a tool to let you keep track of trails. They are in the early stages, but it’s quite interesting. They offer a tool that allows you to mark and annotate websites as you search for information, to save these collections as “trails”, and to share them with others (just as Vannevar Bush envisioned). It almost works, and the ability to save clusters of websites with annotations is useful in itself. The “trail” thing isn’t quite there, though, as most of the examples seem to be just collections of unconnected sites … the potential is there; perhaps as they develop this thing it will develop some true power.


E-mail is outta style. Immediacy is in.

March 20, 2007

It had to happen sooner or later. Apparently email is going to be the realm of old people starting any day now. Email had a very long run and will likely take a while yet to die out, but soon it will be as unusual for your average tech-savvy person as the payphone is now. Perhaps I exaggerate, but not too wildly. It seems that people (particularly young ones) like text-messaging so much that email seems awfully formal, demanding a lot of time and overhead (like having to sit at a computer and open a piece of software and all). This ties in with my experience with the 15-year-olds … clearly, they’d rather clutch the stylish and darn convenient cell-phone than attempt to hunt down a computer, compose something, then wait for a reply. These people seem to be remarkably social, like to make themselves as accessible as possible, and really value immediacy. The same thing seems to be true of MySpace (apparently now being displaced by “virb”, a pretty nifty kind of personal-space thingy), where people are just around all of the time, ready to be befriended, or chatted with, or whatever. Virb adds the latest trend of immediacy to this business … “I updated my page 47 seconds ago”, and you’re a loser if you haven’t updated in the last hour. Perhaps the most extreme example of this (that this old guy knows about) is twitter, which allows you to let the world know what you’re doing RIGHT NOW (and now, and now …). I’m quite serious … on twitter, people just post what they’re doing at that moment, and they make friends with other people who are doing … stuff. Posts might read: “Drinking coffee”, and everyone who lists you as a friend gets a message (by reading the page, RSS, or text-message) to let them know that you’re drinking coffee. It makes no sense, but a bazillion people (including presidential candidate John Edwards) are doing it. Some of these people have thousands of friends, and they continually receive updates … “So what?”, you, quite reasonably, say.
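
The mechanics of that fan-out (“everyone who lists you as a friend gets a message”) can be sketched in a few lines. This is a toy model of my own, not how twitter is actually built:

```python
# Toy sketch of follower fan-out: posting a status pushes a copy of it
# into the inbox of everyone who follows the poster.
class MicroBlog:
    def __init__(self):
        self.followers = {}   # user -> set of people following them
        self.inboxes = {}     # user -> list of updates they've received

    def follow(self, follower, followee):
        self.followers.setdefault(followee, set()).add(follower)

    def post(self, user, status):
        for follower in self.followers.get(user, set()):
            self.inboxes.setdefault(follower, []).append((user, status))

blog = MicroBlog()
blog.follow("alice", "bob")
blog.follow("carol", "bob")
blog.post("bob", "Drinking coffee")
print(blog.inboxes["alice"])   # [('bob', 'Drinking coffee')]
```

With thousands of followers, every trivial post multiplies into thousands of deliveries, which is exactly why the updates feel so constant.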
I’m not sure what this all means, but while this micro-reporting on one’s life seems very silly, it also seems to be drawing in lots of participants. (and making email seem downright pre-historic … and rational).




2.0 = Reprioritizing the Qualitative

February 23, 2007

It took me a while to come up with the subject for this post, and it’s still meaningless (and sounds like the name of a bad blog).

There are many ways in which all things “2.0” (or the semantic web, or the new web, or whatever) are significantly different from previous versions (like 1.0, I guess). It seems that the pace of change, and the degree to which this particular change is simply embedded in the way people behave, has meant that not too many people are really sitting back and pondering what’s going on here. I also think the fact that 2.0 has simply happened rather than being talked about first, and marketed, and forecast, and put in a box so that everyone can see it, has meant that it has really snuck up on people, and most users don’t even realize (or care) that something new is happening. In fact, why should they care? They’re just using things that work … leave the philosophizing to someone else.

Like I said, people point to many different things that make the “new web” different. The social aspect of it seems to be a popular choice, and the participative nature of it is another (and there are more). As a librarian, however, it strikes me that there is something else about all of this that is the most significant, and that ties all of these technologies together. I was at a demo by a library vendor recently, and I was trying to think of what it is that these people are fundamentally missing when they think about human/computer interaction. They produce huge databases of material, and they provide people with the ability to search through immense amounts of metadata and retrieve things. They realize that what they are providing is just matching one’s search terms to identical strings in the metadata … this, of course, is not much more advanced than hitting Ctrl+F and finding words in a web page (which is about as old as computers in terms of technology). In order to become more “web 2.0-ish”, these folks most often say “we’ll add the ability to insert comments on records, then we’ll be like Amazon” (I’m being a little harsh, but this is basically what is going on). I then groan inside …

What is being missed here is more than just comment boxes, or the ability to put a star beside a title that you like (although those are examples of useful 2.0-ish tools). All of the characteristics of 2.0 sites are really, in my mind, trying to accomplish the same thing. That is, they are attempting to reintroduce the dimension of qualitative assessment into the search for information. All of the techniques of collecting comments, allowing for rating systems, analyzing patterns of use, and letting people chat with each other are shooting for the goal of inserting qualitative assessment into search. Currently, the vast majority of library systems have no way to allow for qualitative assessment of items in the database … they match text-strings with other text-strings, and they bring back lists of like objects. Oddly, the subject heading system may have been the closest thing we had to assessment of the items, as it did insert an element of analysis of the content of the material into its organization … but I’m not sure how many people even glance at the subject headings anymore. (Mind you … this being a blog, I haven’t thought this through too carefully!)

Thinking about this was interesting to me because it finally made clear to me why Google is truly in the web 2.0 camp. Most people just think of Google as a search engine … you type in a string and it returns things that should have that string somewhere in them. As I mentioned in relation to medical libraries (below), Google also utilizes qualitative assessment of web pages when its PageRank algorithm looks at how people link pages together. Linking to something in a web page is a qualitative assessment of that page … you are suggesting that another page is worth looking at. This is extremely 2.0, as the real magic of 2.0 sites is their apparent ability to suggest to you what else you might find interesting.
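
For the curious, the core of the PageRank idea (a link counts as a vote that a page is worth looking at) can be sketched in a few lines. This is a simplified textbook version with an assumed damping factor, not Google’s actual implementation:

```python
# Toy PageRank by power iteration: each page repeatedly passes a share of
# its rank to the pages it links to, so heavily-linked pages rank higher.
def pagerank(links, iterations=50, d=0.85):
    pages = list(links)
    rank = {p: 1 / len(pages) for p in pages}
    for _ in range(iterations):
        new = {p: (1 - d) / len(pages) for p in pages}
        for page, outlinks in links.items():
            for target in outlinks:
                new[target] += d * rank[page] / len(outlinks)
        rank = new
    return rank

links = {"a": ["b"], "b": ["c"], "c": ["a", "b"]}
ranks = pagerank(links)
print(max(ranks, key=ranks.get))   # "b": it collects the most link-votes
```

The qualitative judgment lives entirely in the link structure; nobody rated anything explicitly, yet the system still knows which page people vouch for most.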

There’s something else really important … library systems are strictly in the business of organizing information. This seems like a trivial statement. 2.0 sites organize information as well, but the most important aspect of that organization is the fact that they also organize the characteristics of the users of the system and link the two together (this seems trivial as well, I suppose). The idea that we need to think about the characteristics of our users and link those to the characteristics of our information is not yet in our systems, however, and we seem very hesitant to take that step. We link up like with like when it comes to data, but there is a huge amount of potential being wasted by not linking up like with like when it comes to our users.
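
Linking up like with like among users can be sketched with a minimal neighbour-based recommender: find the user whose past borrowing most overlaps with yours, and suggest what they have that you don’t. The data and names here are invented for illustration:

```python
# Toy sketch of user-to-user linking: recommend items from the user whose
# borrowing history overlaps most with the target's. All data is invented.
def most_similar_user(target, histories):
    return max(
        (u for u in histories if u != target),
        key=lambda u: len(histories[target] & histories[u]),
    )

def recommend(target, histories):
    peer = most_similar_user(target, histories)
    return histories[peer] - histories[target]

histories = {
    "pat": {"book_a", "book_b", "book_c"},
    "sam": {"book_a", "book_b", "book_d"},
    "lee": {"book_e"},
}
print(recommend("pat", histories))   # {'book_d'}: what Sam read that Pat hasn't
```

This is the step the catalogue never takes: the data about the users is as valuable as the data about the items, and linking the two is where the 2.0 magic comes from.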