Search V Next Final
1. This is a screen capture from the last time Eric introduced my heresy, at the IA Summit Redux 2007 in Second Life.
2. For far too long computer science has directed the development of search systems. This is
problematic from an experience point of view because computer science measures success by
different standards than we do. It is no wonder then that search systems have developed with
minimal attention to the user experience beyond the assumed perfection of results relevance and
the appropriate ad-matching. The intent of this plenary is to inspire us all to engage on a deeper
level in designing search experiences that do more than sell products well.
3. SES New York 2005: Mike Grehan… explains that engines want the most relevant results, which is hard “because end users are search nitwits!”
http://www.seroundtable.com/archives/001600.html
Too much information
Hosted Websites
•July 1993: 1,776,000
•July 2005: 353,084,187
Individual Web pages
•1997: 200 million Web pages
•2005: 11.5 billion pages – now likely well over 12 billion
•2009: Google announces that its spiders have found 1 trillion URLs and that the Google index is at 100+ billion pages
No Silver Bullet Solution
•Language and perception are different
•Some people think women put their stuff in a purse, others a pocketbook, and others a handbag.
•“Animal” can mean a living creature, a Sesame Street character, or an uncouth person
•Over 140 calculations are now used for PageRank valuation, and it still “gets it wrong a good percentage of the time”
•Customers are looking because they don’t know
•Customers no longer know how to construct successful queries
•Search engine intent is not always “finding the most relevant information”
Cost of finding information, according to an April 2006 IDC report: $5.3 million for every 1,000 workers
5. Woody Allen says that 80% of success is showing up. That is how it works with search engines
also; you have to show up in the index to show up in the results. Here are two screen captures
from my Agency’s website. The one in the upper left is what our customers see. The one in the lower right is what the search engine “sees.” So, someone using a search engine to find my agency will not do so, because all the spider “sees” is a big black hole.
If you want your customers to find the websites that you design and the interactions and
experiences contained there using search technology, you must have text on the page.
A prophet is not taken seriously in “her” own land.
6. Michael Wesch, Kansas State University, has done a masterful job of looking at the many ways in
which digital text is different from print/spoken/filmed text. So, why do we keep treating it as if it
were the same?
If Tim Berners-Lee knew where we would be now, he would not have used HTTP as a foundation for the Web, but he did and we’re stuck with it. Since then, we’ve been trying to design around the limitations without success. Search systems are founded in text. Their capability to index visual media is growing but is based on translation into text.
It is, and will remain so for the foreseeable future, all about text. The maturation of the Semantic
Web will further enhance the importance of content on the page and meaning off the page.
7. The search engines are like people who keep buying bigger clothes to hide their weight gain. Sooner or later the cut must come, as it already has to some extent with Google, which starts with whether or not the page is index-worthy at all.
Google Caffeine: new infrastructure opened to developer testing in public beta (August 2009). Even cheap infrastructure has its cost limits, and Google looks to be reaching its limit with regard to retention of what it is finding out there; likely a lot of “Web junk” doesn’t even make the cut.
Google Caffeine is:
•Faster
•More keyword string based relevance
•Real time indexing – breaking news
•Index volume
Currently, the determination is done by computational math. Who should decide what goes and what stays? Us! We can influence the search engines’ behavior by getting rid of the “set it and forget it” method of Web publishing. Keep content fresh and current. Check every now and then. Publish deep, context-rich content and tend to it. Not all of it, just the most important pieces. Not all content is created equal.
Using the Internet: Skill Related Problems in User Online Behavior; van Deursen & van Dijk; 2009
System and Method of Encoding and Decoding Variable-length data: June 27, 2006
http://www.worldwidewebsize.com/
8. Here it is, the famous (to some, infamous) PageRank algorithm, in its most stripped-down state. Rumor has it that the algorithm now has in excess of 27 components. We’ll look at some of these extensions in a few moments.
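The formula itself appears on the slide rather than in these notes; for reference, the stripped-down form usually quoted from Brin and Page’s original paper is (standard notation, not necessarily the slide’s):

$$PR(A) = (1 - d) + d\left(\frac{PR(T_1)}{C(T_1)} + \cdots + \frac{PR(T_n)}{C(T_n)}\right)$$

where T_1…T_n are the pages linking to A, C(T_i) is the number of outbound links on T_i, and d is the damping factor, commonly set to 0.85.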
Important to note: The PageRank algorithm is a pre-query calculation. It is a value that is assigned
as a result of the search engine’s indexing of the entire Web and the associated value has no
relationship to the user’s information need. There have been a number of additions and
enhancements to lend some contextual credence to the relevance ranking of the results.
When Google appeared in 1998, it was the underdog to search giants like AltaVista and Yahoo! Its simplified relevance model was founded on human mediation through linking [each link was at that time the product of direct human endeavor and so was viewed as a “vote” for the page’s or site’s relevance and information merit]. It is not so much the underdog now, with 64.6% of all U.S. searches (13.9 billion searches in August 2009, or nearly 420 million searches per day in the U.S. alone).
There are only so many slots in the golden top 10 search results for any query. Am I the only one who is concerned with the consolidation of so much power in a single entity? And is it perceived power, something we can do something about, or actual power, something that we must learn to live with?
Comscore Search Engine Market Share, August 2009
http://www.comscore.com/Press_Events/Press_releases/2009/9/comScore_Releases_August_2009_U.S._Search_Engine_Rankings
9. Hilltop was one of the first to introduce the concept of machine-mediated “authority” to combat the human manipulation of results for commercial gain (using link blast services, viral distribution of misleading links, and the like). It is used by all of the search engines in some way, shape or form.
Hilltop is:
•Performed on a small subset of the corpus that best represents the nature of the whole
•Pages are ranked according to the number of non-affiliated “experts” that point to them – i.e., experts not in the same site or directory
•Affiliation is transitive [if A=B and B=C then A=C]
The beauty of Hilltop is that, unlike PageRank, it is query-specific and reinforces the relationship between the authority and the user’s query. You don’t have to be big or have a thousand links from auto parts sites to be an “authority.” Google’s 2003 Florida update, rumored to contain Hilltop reasoning, resulted in a lot of sites with extraneous links falling from their previously lofty placements.
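To make the non-affiliation mechanic concrete, here is a minimal sketch in Python. It is only an illustration: it approximates affiliation by shared host, whereas Hilltop also treats things like shared IP ranges and directories as affiliation, and it assumes the set of “expert” pages has already been selected.

```python
from urllib.parse import urlparse

def host(url):
    """Simplified affiliation key: the host portion of a URL."""
    return urlparse(url).hostname or ""

def hilltop_votes(target_url, expert_pages):
    """Count votes for target_url from experts that are not affiliated with it
    (or with an expert that has already voted). expert_pages is a list of
    (expert_url, set_of_outlinks) pairs."""
    counted_hosts = {host(target_url)}      # the target's own host never counts
    votes = 0
    for expert_url, outlinks in expert_pages:
        h = host(expert_url)
        if h in counted_hosts:
            continue                         # affiliated: skip this expert
        if target_url in outlinks:
            votes += 1
            counted_hosts.add(h)             # one vote per affiliated group
    return votes

# Two independent experts plus an affiliated mirror: the score is 2, not 3
experts = [
    ("http://experts-a.example/best-links", {"http://target.example/page"}),
    ("http://experts-b.example/top10",      {"http://target.example/page"}),
    ("http://experts-a.example/mirror",     {"http://target.example/page"}),
]
print(hilltop_votes("http://target.example/page", experts))
```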
Google artificially inflates the placement of results from Wikipedia because it perceives Wikipedia as an authoritative resource due to social mediation and commercial agnosticism. Wikipedia is
not infallible. However, someone finding it in the “most relevant” top results will certainly see it
as so.
10. Most SEOs hate keywords. I say that they are like Jessica Rabbit in “Who Framed Roger
Rabbit”…not bad, just drawn that way.
Keywords were the object of much abuse in the early part of the Web and almost totally
discounted by the search engines. With the emerging Semantic Web that strengthens the topic-
sensitive nature of relevance calculation combined with the technology’s ability to successfully
compare two content items for context, keywords might make more sense. In any event, they do
more good than harm. So, I advise my clients to have 2-4 key concepts from the page represented here. The caveat is that they be drawn from the page.
Topic-Sensitive PageRank
•Computes PR based on a set of representational topics [augments PR with content analysis]
•Topics derived from the Open Directory
•Uses a set of ranking vectors: pre-query selection of topics + at-query comparison of the similarity of the query to the topics (sketched below)
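A sketch of how the two stages combine at query time, following Haveliwala’s published formulation (the notation here is mine, not the slide’s):

$$\mathrm{rank}(d \mid q) = \sum_{j} P(c_j \mid q)\, PR_{c_j}(d)$$

where each $PR_{c_j}$ is a PageRank vector biased toward topic $c_j$, computed before any query arrives, and $P(c_j \mid q)$ is the similarity of the query to that topic, computed at query time.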
11. Search 2.0 is the “wisdom of crowds.”
Now we help each other find things. Online this takes the form of online bookmarking and community sites like Technorati (social sharing), Delicious (social bookmarking) and Twitter (micro-blogging), among others. Search engines are now leveraging these forums as well as their own extensive data collection to calculate relevance. Some believe that social media will replace search. How can your friends and followers beat a 100 billion page index? What if they don’t know?
12. If machines are methodical, as we’ve seen, and people are emotional, as we experience, where is the
middle ground? Are we working harder to really find what we need or just taking what we get and
calling it what we wanted in the first place?
13. 10/10/2009
Developed by a computer science student, this algorithm was the subject of an intense bidding war between Google and Microsoft, which Google won. The student, Ori Alon, went to work for Google in April 2006 and has not been heard from since. There is no contemporary information on the algorithm or its developer.
Relational content modeling done by machines – usually contextualized next steps.
14. There is no such thing as “advanced search” any longer. We’re all lulled into the false sense that the search engine is smarter than we are. Now the search engines present a mesmerizing array of choices, distracting from the original intent of the search.
Using the Internet: Skill Related Problems in User Online Behavior; van Deursen & van Dijk; 2009
15. Research tells us that searchers are having a hard time navigating the results because of the collapsing of results on the page and the contextual advertising bordering the organic results.
Using the Internet: Skill Related Problems in User Online Behavior; van Deursen & van Dijk; 2009
16. Using the Internet: Skill Related Problems in User Online Behavior; van Deursen & van Dijk; 2009
17. Watch out for those Facebook applications, quizzes, etc., Tweets, and LinkedIn data
Improving Search using Population Information (November 2008): Determine population
information associated with the query that is derived from a population database
•Locations of users
•Populations that users are associated with
•Groups users are associated with (gender, shared interests, self- & auto-assigned identity data)
Rendering Context Sensitive Ads for Multi-topic searchers (April 2008): Resolves ambiguities by
monitoring user behavior to determine specific interest
Presentation of Local Results (July 2008): Generating 2 sets of results, one with relevance based
on location of device used for search
Detecting Novel Content (November 2008): identify and assign a novelty score to one or more textual sequences for an individual document in a set
Document Scoring based on Document Content Update (May 2007): scoring based on how
document updated over time, rate of change, rate of change for anchor-link text pointing to
document
Document Scoring based on Link-based Criteria (April 2007): System to determine time-varying behavior of links pointing to a document; growth in the number of links pointing to the document (whether it exceeds an acceptable threshold), freshness of links, age distribution of links
Deployed as Google Scout
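As a hedged illustration of the link-based criteria only (the window sizes, threshold, and names below are invented for the example, not taken from the patent):

```python
from datetime import date
from statistics import mean

def link_signals(link_dates, today=date(2009, 10, 10), spike_threshold=2.0):
    """Toy reading of the patent's signals: growth of inbound links across two
    90-day windows, plus the age distribution of all known links."""
    ages = [(today - d).days for d in link_dates]
    recent = sum(1 for a in ages if a <= 90)           # links gained in the last 90 days
    prior = sum(1 for a in ages if 90 < a <= 180)      # the 90 days before that
    growth = recent / max(prior, 1)
    return {
        "growth_rate": growth,
        "suspicious_spike": growth > spike_threshold,  # growth beyond the threshold
        "mean_link_age_days": mean(ages) if ages else None,
    }

# A document that suddenly gained most of its links this quarter looks spiky
print(link_signals([date(2009, 9, 1)] * 40 + [date(2009, 5, 1)] * 5))
```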
18. Microsoft: Launches “decision engine” with a focus on multiple meanings (contexts) as well as term indexing and topic association and tracking
-Lead researcher Susan Dumais is at the forefront of using user behavior to predict search relevance
-Look to the recent acquisitions of Powerset (semantic indexing) and FAST ESP (semantic processing)
Calculating Valence of Expressions within Documents for Searching a Document Index (March 2009): System for natural language search and sentiment analysis through a breakdown of the valence manipulation in a document
Efficiently Representing Word Sense Probabilities (April 2009): Word sense probabilities stored in a semantic index and mapped to “buckets” (a toy sketch of this bucket mapping follows this list)
Tracking Storylines Around a Query (May 2008): Employ probabilistic or spectral techniques to discover themes within documents delivered over a stream of time
•Consolidate the plurality of info around certain subjects (track stories that continue over time)
•Collect results over time and sort (keeps track of the current themes and alerts to new ones)
•Track
•Rank (relevance)
•Present abstracts
•Compares the query with the contents of each document to discover whether the query exists implicitly or explicitly in the received document
•Builds topic models
Document Segmentation based on Visual Gaps (July 2006): Document white space/gaps used to
identify hierarchical structure
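A toy sketch of the word-sense bucket mapping mentioned above, assuming the idea is simply to quantize per-sense probabilities so they can be stored compactly in the semantic index (the bucket count and rounding scheme are invented for the example):

```python
def bucketize(sense_probs, num_buckets=8):
    """Map word-sense probabilities to small integer buckets for compact storage."""
    return {sense: min(int(p * num_buckets), num_buckets - 1)
            for sense, p in sense_probs.items()}

# "bank" with two candidate senses: the finance sense is far more likely here
print(bucketize({"bank/finance": 0.82, "bank/river": 0.18}))
```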
19. Systems and Methods for Contextual Transaction Proposals (July 2006)
Delivering Items Based on Links to Resources Associated with Results (April 2008): Present links to
resources associated by links to resources presented in results set
Web Activity Monitoring System with Tracking by Categories and Terms (December 2006): Collect event data from servers (traffic, search requests, purchases, etc.), categorize and analyze to detect flurries of activity (increased interest) associated with a topic, term or category
21. The Pew Internet Trust found that over 60% of users trust search engine results to be the most accurate.
No one really reads the terms-of-use agreements that they sign, and they certainly don’t keep up with changes to those agreements over time.
The intentions of the application with regard to data collection are not always transparent. What’s up with those Facebook quizzes, surveys and games? Where does that information really live?
66% of Americans object to online tracking, according to a new study from the University of Pennsylvania, and the number increases when subjects find out how many ways they are tracked across the web. 84% said it was not okay to be tracked on other websites.
55% of respondents aged 18 to 24 objected to tailored advertising.
Americans Oppose Web Tracking By Advertisers: Stephanie Clifford: International Herald Tribune
– October 1, 2009
22. Some observers claim that Google is now running on as many as a million Linux servers. At the very least, it is
running on hundreds of thousands. When you consider that the application Google delivers is instant access to
documents and services available from, by last count, more than 81 million independent web servers, we're
starting to understand how true it is, as Sun Microsystems co-founder John Gage famously said back in 1984,
that "the network is the computer." It took over 20 years for the rest of the industry to realize that vision, but
we're finally there. ...
First, privacy. Collective intelligence requires the storage of enormous amounts of data. And while this data
can be used to deliver innovative applications, it can also be used to invade our privacy. The recent news
disclosures about phone records being turned over to the NSA is one example. Yahoo's recent disclosure of the
identity of a Chinese dissident to Chinese authorities is another.
The internet has enormous power to increase our freedom. It also has enormous power to limit our freedom,
to track our every move and monitor our every conversation. We must make sure that we don't trade off
freedom for convenience or security. Dave Farber, one of the fathers of the Internet, is fond of repeating the
words of Ben Franklin: "Those who give up essential liberty to purchase a little temporary safety deserve
neither, and will lose both."
Second, concentration of power. While it's easy to see the user empowerment and democratization implicit in
web 2.0, it's also easy to overlook the enormous power that is being accrued by those who've successfully
become the repository for our collective intelligence. Who owns that data? Is it ours, or does it belong to the
vendor?
If history is any guide, the democratization promised by Web 2.0 will eventually be succeeded by new
monopolies, just as the democratization promised by the personal computer led to an industry dominated by
only a few companies. Those companies will have enormous power over our lives -- and may use it for good or
ill. Already we're seeing companies claiming that Google has the ability to make or break their business by how
it adjusts its search rankings. That's just a small taste of what is to come as new power brokers rule the
information pathways that will shape our future world.
http://radar.oreilly.com/2006/05/my-commencement-speech-at-sims.html
My Commencement Speech at SIMS (May 2006)
24. A search on Google U.S. shows a relevance focus on Wikipedia, with a lead-off on the 1989 massacre. Video and Image results also focus on this aspect of the search.
25. Google China shows a different form of relevance, with a focus on tourism for the square.
26. Google France follows the U.S. version’s lead, with the top 10 results dominated by results focused on the massacre.
30. Bowman leaves Google
http://stopdesign.com/archive/2009/03/20/goodbye-google.html
“Yes, it’s true that a team at Google couldn’t decide between two blues, so they’re testing 41
shades between each blue to see which one performs better. I had a recent debate over whether
a border should be 3, 4 or 5 pixels wide, and was asked to prove my case. I can’t operate in an
environment like that. I’ve grown tired of debating such minuscule design decisions. There are
more exciting design problems in this world to tackle.”
The announcement of Bowman leaving Google started a lengthy thread on the Interaction Design
Association list about search design and interaction
http://www.ixda.org/discuss.php?post=40237