Saturday, April 23, 2011

Microblog #52, "Living With Complexity, Chapters 3-4"

This post includes the full blog for Living With Complexity


3
Summary
This chapter describes how even simple systems can be difficult to operate. Signage also gets considerable attention.

Discussion
The examples of all the signs were interesting. I am not surprised at all about security experts duplicating passwords.

4
Summary
This chapter discusses signifiers, which are different from affordances - or at least, so says Norman. These are social or environmental cues to the correct actions to take, without "forcing" them the way affordances do. The given example is correctly labeling the salt and pepper shakers.

Discussion
The concept is, as noted, awfully similar to affordances. I think once you understand the first, the second follows naturally.

Full Blog
Summary
This book deals with the extent, role, & scope of complexity in everyday life. Themes include that the world itself is complex, a revisiting of the mental model, the point that calling for simplification at all costs is itself an oversimplification, how simple systems can become complex and how people cope, and signifiers.

Discussion
By this time, we have all come to know what to expect from Norman. This book has some good ideas, but they could have been as easily expressed with a description and an example rather than a whole chapter, and there was of course some repetition from his earlier work.

The point he makes that I like most is that we should only be upset about unneeded complexity, rather than all complexity.

Friday, April 22, 2011

Paper Reading #25, "A Code Reuse Interface for Non-Programmer Middle School Students"

http://angel-at-chi.blogspot.com/2011/04/paper-reading-19-tell-me-more-not-just.html
http://csce436-hoffmann.blogspot.com/2011/04/paper-reading-24-using-language.html

A Code Reuse Interface for Non-Programmer Middle School Students

Paul A. Gross, Micah S. Herstand, and Caitlin L. Kelleher, Washington University in St. Louis
Jordana W. Hodges, University of North Carolina

IUI’10, February 7–10, 2010, Hong Kong, China

Summary
This paper describes a tool to assist in code reuse for novice programmers, especially middle schoolers. It is specifically associated with an environment called "Looking Glass IDE" that enables users to create animated stories. This is considered a desirable area of study because the middle school years are believed to be a critical point in attracting boys & girls to the computer field.

The project, which was never given a name beyond "code reuse interface," functions by allowing users to save a script for an action, say one object running into another and knocking it over, generalize it, and then re-insert it elsewhere or in another program, specifying new characters to assume the roles. It includes protections against compilation errors, such as ordering a table to run.
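To make that save/generalize/re-insert flow concrete, here is a minimal Python sketch (my own illustration, not the authors' code); the class names, the role/action representation, and the ability check standing in for their error protections are all assumptions based on the description above.

```python
# A minimal sketch (not the paper's actual code) of the save/generalize/reuse flow.
# SceneObject, ReusableScript, and the ability check are hypothetical.

class SceneObject:
    def __init__(self, name, abilities):
        self.name = name
        self.abilities = set(abilities)   # e.g. {"run", "fall_over"}

class ReusableScript:
    """A saved action sequence with its original actors replaced by roles."""
    def __init__(self, steps):
        # steps: list of (role, action) pairs, e.g. ("runner", "run")
        self.steps = steps

    def instantiate(self, casting):
        """Re-insert the script with new objects filling each role.

        Refuses bindings whose objects lack the needed ability - roughly the
        kind of protection the paper describes (no telling a table to run)."""
        for role, action in self.steps:
            actor = casting[role]
            if action not in actor.abilities:
                raise ValueError(f"{actor.name} cannot {action}")
        return [(casting[role].name, action) for role, action in self.steps]

# Usage: generalize a "runner knocks something over" scene and recast it.
script = ReusableScript([("runner", "run"), ("target", "fall_over")])
robot = SceneObject("robot", {"run", "wave"})
lamp = SceneObject("lamp", {"fall_over"})
print(script.instantiate({"runner": robot, "target": lamp}))
```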

The results of the study, which are provided in some detail in the paper, appear to have been reasonably satisfying. Out of 47 subjects, 77% were able and motivated to create a program with more than five lines of code. Also successful was the "social propagation" of code and ideas among the students.

Discussion
This paper did a very good job of presenting the work in an understandable fashion, although whether that was because the authors were good writers or because the material was inherently simpler since it was ultimately intended for middle schoolers is an open question.

The paper is significant because producing more good programmers is always significant.

Given infinite time and resources, the best next work would be to give this to a bunch of students and see if it increased either their interest or ability in coding relative to their peers, measured five or more years after the initiation of the test.

The potential flaw with the whole concept is the premise behind the program itself. I am not convinced that novice stages in things like reading and coding are actually necessary. Rather than learning a cut-and-paste, pre-assembled language, perhaps we should start folks out in real programming languages, just like we start 'em out in math with real equations, rather than reassembling others'. Just a thought, I'm no expert.

From the paper.

Tuesday, April 19, 2011

Microblog #51, "Living With Complexity, Chapters 1-2"

1
Summary
This chapter explains that complexity is not inherently the problem, since the world around us is complex (examples are given); rather, the problem is unnecessary complexity.

Discussion
This is another Norman book, apparently more recent than the others. We'll see what he has to say this time.


2
Summary
This chapter re-introduces us to the mental model, rehashes chapter one, explains how a lot of things we think of as simple aren't as simple as we think, and finally points out that the people who are calling for simplicity may be oversimplifying the problem (the irony).

Discussion
How come he gets to cite Wikipedia and I can't?

On a more serious note, there is a lot of rehash in this chapter. The principal difference in this one almost seems to be a repudiation of the simplicity-for-simplicity's-sake attitude that was there occasionally in Design of Everyday Things.

Full Blog, "Why We Make Mistakes"

Summary
This book was about the reasons people make mistakes. Each chapter covers a different cause, for example skimming, gives detailed examples, backs it up with numbers, and attempts to explain it. Some of the causes were obvious, while some were more insightful. Similarly, several of them seemed to be clearly correct while in other cases the author's position was not completely convincing.

Discussion
I enjoyed this book. I think my favorite parts were where he demonstrated his points with challenges (which is the real penny, words to the anthem, etc), although how badly I clobbered the two I took may not have really helped his point all that much.

He avoided the repetition that some earlier assigned works struggled with, while still managing to find points to make that hadn't yet been made. I would consider this the best book assigned so far in this class, and probably second to Mythical Man Month overall (although that one seems to be tailing off at the end).

The hazards of a poor mental model. Source: Calvin & Hobbes, by Bill Watterson, found at site http://freewebs.com/calhobbes/sunset.gif via GIS.

Paper Reading #24, "Outline Wizard"

http://zmhenkel-chi2010.blogspot.com/2011/03/paper-reading-16-performance.html
http://ryankerbow.blogspot.com/2011/04/paper-reading-23.html

Outline Wizard: Presentation Composition and Search

Lawrence Bergman, Jie Lu, Ravi Konuru, Julie MacNaught, Danny Yeh

IBM T.J. Watson Research Center

IUI’10, February 7–10, 2010, Hong Kong, China




Summary
Outline Wizard is a PowerPoint plug-in designed to provide hierarchical structure to presentations of existing material. It is built to fill a need the authors perceive: current presentation software treats presentations simply as linear collections of slides. The intended benefits are improving both the effectiveness and ease of use of structure, of searching, and of incorporating results into the presentation. Additional features include an algorithm to scan a presentation and extract an outline, and searching based on the outline (either derived or provided) to more easily find content in existing presentations.
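As a rough illustration of the outline-derived search idea, here is a small Python sketch; the Slide structure, the level heuristic, and the substring matching are my own assumptions for illustration, not anything taken from the paper.

```python
# A rough sketch (mine, not the authors') of outline-derived search over slides.

from dataclasses import dataclass

@dataclass
class Slide:
    title: str
    level: int          # 0 = section, 1 = subsection, ... (however derived)
    body: str = ""

def build_outline(slides):
    """Nest each slide under the most recent slide with a smaller level."""
    outline, stack = [], []
    for s in slides:
        node = {"slide": s, "children": []}
        while stack and stack[-1]["slide"].level >= s.level:
            stack.pop()
        (stack[-1]["children"] if stack else outline).append(node)
        stack.append(node)
    return outline

def search(outline, query, path=()):
    """Return matches with their outline path, not just a flat list."""
    hits = []
    for node in outline:
        s = node["slide"]
        here = path + (s.title,)
        if query.lower() in (s.title + " " + s.body).lower():
            hits.append(" > ".join(here))
        hits.extend(search(node["children"], query, here))
    return hits

slides = [Slide("Results", 0), Slide("User Study", 1, "six participants"),
          Slide("Timing", 1)]
print(search(build_outline(slides), "participants"))  # ['Results > User Study']
```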

Tests indicated that both algorithms were effective, and a user study of the software met with "enthusiastic" results. Five of the six participants believed that the software would be of significant benefit relative to existing methods; the last was undecided. The most immediate piece of proposed future work would be to expand the search algorithm to return sets of slides rather than single slides as units.

The user interface, from the paper.



Discussion
This paper was the best written and most accessible of the IUI papers I have been assigned. I hadn't thought of this type of thing beforehand, but it seems like a structure for presentations would be both interesting and useful to have. The biggest flaw in the paper was the small sample size of the user study. Six people really isn't very many. In the future, I would like to see this software tested on a much larger scale and see if the users are as pleased over a long term as the short term.

Saturday, April 16, 2011

Microblog #50, "Why We Make Mistakes, Chapters 12, 13, Conclusion"

12
Summary
This chapter describes constraints and affordances from the perspective of the WWMM author. The point is the same, the examples different.

Discussion
The tidbit about hospitals and CVNs was interesting, if predictable.


13
Summary
This chapter deals with projection error; that is, people not accurately understanding how a change will affect their happiness.

Discussion
I didn't need this book to tell me that moving to the train wreck known as California is a terrible idea. There's a reason all the sane folks are bailing out (I just wish they'd stop voting for the same maniacs who have run California into the ground after moving to my beautiful Colorado).

Conclusion
Summary
This chapter outlines how you can avoid mistakes, mostly by enumerating the examples of mistakes from the book and generalizing how they could have been avoided.

Discussion
This chapter didn't really contain any new information, and it suffers both from getting a little on the sappy side and from overstating points, as in some previous chapters.

Paper Reading #23, "Facilitating Exploratory Search by Model-Based Navigational Cues"

http://alex-chi.blogspot.com/2011/04/paper-reading-18-dmacs-building.html
http://detentionblockaa32.blogspot.com/2011/04/paper-reading-23-natural-language.html

Facilitating Exploratory Search by Model-Based Navigational Cues

Wai-Tat Fu, Thomas G. Kannampallil, and Ruogu Kan
University of Illinois
Presented at IUI’10, February 7–10, 2010, Hong Kong, China

Summary
The authors of this paper built a simulator to test the notion that unstructured social tagging may cause difficulties for searchers. The hypothesis being challenged is that casual tagging will eventually become an incoherent mess of tags. The counter-hypothesis is that the tagging isn't as random as thought, and will instead follow cohesively from whichever tags are posted earliest. That is, early tagging heavily influences the tagging of later users.

The Semantic Imitation Model was designed to simulate the actions of expert and novice users across a document space assembled for the study. The results of the simulation did seem to indicate that tags converge.
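To see why early tags would come to dominate, here is a toy imitation simulation in Python; it is emphatically not the authors' Semantic Imitation Model, and the imitation probability, vocabulary, and user count are made-up parameters.

```python
# A toy simulation (not the paper's model): if later taggers mostly copy tags
# that are already visible, the earliest popular tags end up dominating.

import random
from collections import Counter

def simulate(n_users=500, imitation_prob=0.8, vocab_size=20, seed=0):
    random.seed(seed)
    vocabulary = [f"tag{i}" for i in range(vocab_size)]
    counts = Counter()
    for _ in range(n_users):
        if counts and random.random() < imitation_prob:
            # Imitate: pick an existing tag in proportion to its popularity.
            tags, weights = zip(*counts.items())
            choice = random.choices(tags, weights=weights)[0]
        else:
            # Otherwise choose freely from the whole vocabulary.
            choice = random.choice(vocabulary)
        counts[choice] += 1
    return counts.most_common(5)

print(simulate())   # a handful of early tags carry most of the mass
```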

from the paper



Discussion
I have to be honest, I don't think much of this paper. First, the problem statement being challenged is very weak - the suggestion that tagging will go all over the place and become useless is very counterintuitive, and it almost feels like the authors built a strawman to have something to challenge.

I really don't understand why they ran simulations rather than finding some actual users to do the study for them...if you build searcher simulators to model searches, they will be based on your understanding of how searchers behave - even if you don't intend them to be - and will tend towards validating your understanding of the whole system.

It's possible that some of the above criticism is invalid, and I simply misunderstood. In that case, the paper's flaw is rather a failure to communicate effectively. Future work: do it again, with people this time.

In the interest of completeness: the work is significant because, given the prevalence of searching, any improved understanding of it is quite useful.

Tuesday, April 12, 2011

Microblog #49, "Why We Make Mistake, Chapters 10 & 11"

10
Summary
This chapter discusses overconfidence, how it leads to errors, and how it is exploited by businesses.


Discussion
Your calculation of the odds may be off due to overconfidence, but it's telling that all five guys who thought they would shoot badly did. And wasn't this book whining just a chapter or two ago about the collective cowardice of professional football coaches? I think the point he's making here isn't as strong as he thinks it is.

11

Summary
This chapter is all about the perceived extent of and problems with winging it.

Discussion
The examples here (especially the bomb and the ball) are a serious step down from previous chapters. Here, the author is oversimplifying complex situations in order to make his point, and as a result not making it as well as he thinks he is.

Microblog #48, "Media Equation Parts 1, 2, & 3"

This also includes the Full Blog for Media Equation



Machines and Mindlessness: Social Responses to Computers

Clifford Nass - Stanford
Youngme Moon - Harvard
The Society for the Psychological Study of Social Issues, 2000

Computers are Social Actors
Clifford Nass, Jonathan Steuer, and Ellen R. Tauber, Stanford
CHI '94, Boston, Massachusetts.

Can Computer Personalities be Human Personalities?
Clifford Nass, Youngme Moon, BJ Fogg, Byron Reeves, and Chris Dryer, Stanford
CHI '95, Denver, Colorado


1
Summary
This paper describes how individuals react to computers as if they were humans, even though they clearly do not believe this to be the case. The authors speculate on why this is so, and appear to favor the theory that scripts simply kick in in response to certain stimuli. That is, the interaction is "mindless".

Discussion
This is an interesting topic. I wonder if the effects described herein are more prevalent among the general public than among computer scientists? Or, maybe more likely, it varies by effect. I would guess computer people are less likely to be polite to a bot, but more likely to name their machines.


2
Summary
This paper covers much the same material as the first, in a shorter format with much more precise presentation of results, which were broadly similar. This provides additional evidence that people are polite to computers, treat them as entities in the social sense, and even ascribe them gender without consciously doing so.

Discussion
There isn't a whole lot to say about this paper that I didn't say about the first, although I do approve of the new format.


3
Summary
This paper is the very brief finale to the Media Equation series. It rehashes, in paper 2's style, one of the few points from paper 1 that was not studied in paper 2.

Discussion
See the commentary above. All this paper states is that people will treat a computer as a dominant actor if the word choice for the questions reflects that, and the converse.

Full Blog
Summary
This series of papers describes, in detail and with supporting graphs and experiments, the high degree to which humans treat computers as if they were humans. The effect appears to extend to almost anything handled at the unconscious level, rather than what has to be consciously thought about, where everyone agrees computers are not people. Interactions tested included whether humans are polite towards computers, whether they ascribe gender to computers, and whether rules about what is said to an individual's face rather than to his evaluators still hold.

Discussion
The papers were very interesting because they describe some unexpected interactions between human and machine. The biggest weakness was that the writing wandered somewhat, at least in the first paper, causing trouble with reader engagement.

It would be interesting, as future work, to see if the "separate actors" exercises would produce the same results if repeated with different (and different-looking!) programs on the same machine substituted for different machines.

Clifford Nass, from stanford.edu via GIS

Paper Reading #22, " DocuBrowse"

http://isthishci.blogspot.com/2011/04/paper-reading-19-from-documents-to.html
http://pfrithcsce436.blogspot.com/2011/04/paper-reading-21-supporting-exploratory.html#comments

DocuBrowse: Faceted Searching, Browsing, and Recommendations in an Enterprise Context

Andreas Girgensohn, Francine Chen, and Lynn Wilcox, FX Palo Alto Laboratory
Frank Shipman, TAMU Computer Science
Presented at IUI’10, February 7-10, 2010, Hong Kong, China

Summary
This paper describes DocuBrowse, a system to allow "easy and intuitive" enterprise searches. Enterprise searches are searches for documents inside a given organization, lacking the interlinks that make internet searching in the modern sense so effective.

The biggest advantage to DocuBrowse as a document organization system is that files can be in more than one directory; instead of having to find the one specific location you need, you can come in from any angle. Other features the authors attempted to implement include being able to see an entire tree in one query, retaining structure in results (rather than just a Google-esque list, see image), and a genre detector telling us what type of document each result is. The last is accomplished by estimation from images via a system known as GenIE (Genre Identification and Estimation) and presumably applies to scanned documents, since in the base case you can tell a .rtf from a .doc, etc., by the file extension.

In determining the relevance of documents to individuals, organizational structure and job class replace access history. That is, instead of being pointed towards documents they have seen before, users are pointed towards documents that are relevant to their position or that others with the same or equivalent job titles have accessed.
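A hypothetical sketch of that role-based relevance idea in Python: score documents by how often people with the same job title accessed them, instead of by the searcher's own history. The data shapes and function names here are assumptions, not the paper's implementation.

```python
# Hypothetical role-based ranking in the spirit of the description above.

from collections import Counter, defaultdict

# access_log: list of (employee, job_title, document) events
def rank_for(job_title, access_log, top_n=5):
    by_title = defaultdict(Counter)
    for employee, title, doc in access_log:
        by_title[title][doc] += 1
    # Rank by how often peers with the same title opened each document.
    return by_title[job_title].most_common(top_n)

log = [("ann", "engineer", "build-guide.pdf"),
       ("bob", "engineer", "build-guide.pdf"),
       ("bob", "engineer", "style-guide.doc"),
       ("carl", "manager", "budget.xls")]
print(rank_for("engineer", log))   # build-guide.pdf ranks first for engineers
```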

The authors state their next intended move in this research is testing with some kind of large organization, as they have only conducted in-house tests so far.




Discussion
This paper is significant because it would be difficult to imagine trying to find critical information without access to a modern search engine. Expanding that as widely as possible is an important goal.

The biggest weakness of the paper was a failure to note how well it worked in in-house testing or to better explain GenIE. The biggest strength was good diagrams.

The future research proposed seems solid, although they might also consider an auto-keywording system if they don't already have one. Currently the implication seems to be it is all manual.

Thursday, April 7, 2011

Paper Reading #21, "Towards a Reputation-based Model of Social Web Search"

http://jimmymho.blogspot.com/2011/04/paper-reading-21-multimodal-labeling.html
http://vincehci.blogspot.com/2011/04/paper-reading-20-data-centric.html

Towards a Reputation-based Model of Social Web Search

Kevin McNally, Michael P. O’Mahony, Barry Smyth, Maurice Coyle, Peter Briggs
University College Dublin, Ireland
Presented at IUI '10, Feb 7-10 2010, Hong Kong

Summary
This paper is about the HeyStaks system for collaborative usage of search engines such as Google. The authors found a need for such a device due to extensive collaborative usage of such systems even without explicit software support in place.

HeyStaks works on a reputation model, carefully designed so that its incentives track actual production of useful shared search content. Other users can vote the information provided by any given searcher as useful or not useful, among other signals. This was found to be reasonably successful in preventing 'gaming' of the system to produce high reputation without producing content.
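A simplified sketch of that incentive idea, assuming a reputation score that only rewards results other users voted useful; the weighting is my own guess, not HeyStaks' actual algorithm.

```python
# Hypothetical reputation score: a searcher earns credit only when results
# they contributed are endorsed by others, so posting junk earns nothing.

def reputation(shared_results):
    """shared_results: iterable of (useful_votes, not_useful_votes) per result."""
    score = 0.0
    for useful, not_useful in shared_results:
        total = useful + not_useful
        if total:
            score += useful / total      # endorsed results count, spam does not
    return score

print(reputation([(4, 1), (0, 3), (2, 0)]))  # rewarded only for endorsed results
```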

One interesting anomaly is that the users seemed to break down into searcher/follower clusters even without this being an explicitly coded or even intended outcome of the system. The results of the user study included in the paper (see image below) roughly reflect this, with five producers and twenty-one consumers.

From the paper.

The authors currently have a 500 user beta underway, but no results from it were included in the paper.


Discussion
The idea here is interesting - certainly it is undeniable that a lot of collaborative searching happens, so anything that helps in that area would be useful - but I'm not sure about its broad applicability. Since the users already have two terminals available to them, I don't see where there is a big prospect for improvement. After all, the example of current collaboration they gave was a user suggesting search terms over another's shoulder.

Let me put my concern this way: When searching the desert, a two seat plane is better than a one seat plane. But is it as good as or better than two one seat planes? I doubt it.

The public beta is a good next step, and I'd kind of like to see a Wizard of Oz type study mocking up the planned finished product.

The other thing they could have improved on was a clearer explanation of how HeyStaks worked. Not the mechanics of the algorithm, but how it interacted with the users. How were searches sent back and forth, for instance? The information may be there, but I can't seem to sift it out.

Microblog #47, "Why We Make Mistakes, Chapters 8 and 9"

8
Summary
This chapter deals with the internal organization of information, using misremembered facts to advocate the notion that we store information in orderly constellations, even when the information is disorderly.

Discussion
Here are my results from the Star Spangled Banner experiment. I didn't look up anything.

Oh say can you see by the dawn's early light
What so proudly we hailed at the twilight's last gleaming
Whose broad stripes and bright stars through the perilous fight
O'er the ramparts we watched were so gallantly streaming?
And the rockets red glare the bombs bursting in air
Gave proof through the night that our flag was still there
Oh say does that star spangled banner yet wave
O'er the land of the free and the home of the brave 


On the shore dimly seen through the mists of the deep
Where the foes haughty host in dread silence reposes
What is that which the breeze, o'er the towering steep
As it fitfully blows half conceals, half discloses?
Now it catches the gleam of the morning's first beam
In full glory reflected now shines on the stream
'Tis the star spangled banner O long may it wave
O'er the land of the free and the home of the brave


And where is the foe who so vauntingly swore
That the havoc of war and the battle's confusion
A home and a country would leave us no more?
Their blood has washed out their foul footsteps pollution
And no refuge could save the hireling and slave
From the terror of flight, or the gloom of the grave
And the star spangled banner in triumph doth wave
O'er the land of the free and the home of the brave


Thus shall it be ever when free men shall stand
Between their loved homes and the wars desolation
Blessed with vict'ry and peace may the Heaven rescued land
Praise the Power which hath made and preserved it a nation
And conquer we must for our cause it is just
Let this be our cry "In God is our trust"
And the star spangled banner forever shall wave
O'er the land of the free and the home of the brave


Making allowances for 'and' vs '&' and 'watched' vs 'watch'd', etc I got all 82 words. For the record, the book has an error on line 5 of the lyrics - "bomb" should be the plural "bombs".

I don't think this is the best example they could have picked. The Anthem is an old enough song that there are multiple correct versions (cry vs. motto, verse 4 line 6), so there are some cases where it isn't really wrong to get lyrics different from the standard being used as a control here.


9

Summary
This chapter discusses the differences in the performances of men and women in various studies and statistics, then relates it to memory.

Discussion
After getting off to a good start the book comes crashing back to earth in this chapter. The two initial examples (Traffic Tickets & Saddam WMDs) are both blatantly flawed (They didn't even bring up the obvious first explanation to the male-female ticket disconnect, namely that cops will be less likely to give a ticket to a woman than a man they pulled over for doing the same thing, for example). Still, some of their point may have merit.

Tuesday, April 5, 2011

Microblog #46, "Why We Make Mistakes, Chapters 6&7"

6
Summary
This chapter covers how frames of mind influence decisions, especially potential losses vs. potential gains.

Discussion
I am flatly insulted by the insinuation that The Piano is somehow a "better" movie than a classic like Clear and Present Danger. It's not quite Hunt For Red October, but come on!

The phenomenon with NFL coaches is well documented, but I strongly doubt that it has anything to do with not knowing the odds. Among people who know anything about the situation (as opposed to this Hallinan character, who saw some numbers that might possibly be construed as supporting some point he wanted to make), many explanations have been offered for this behavior, and "not understanding the risks" isn't one of them.

The first, most obvious, and most often cited theory is that the coaches are playing the odds correctly...just not the team's odds. The way fans and the media perceive games, if a coach goes for it and fails it is his fault, but if he kicks instead and something goes wrong it's the kicker that catches heat. Thus, a conservative coach protects his own job security at the expense of the kicker and the team. A form of yellow-bellied moral cowardice to be sure, but not irrational in the way Hallinan implies. (Rather, he's close but has the locus on the coach when it should be on the less football-smart individuals who still have influence. This includes the ticket-buying public.)

The second theory says that the current decision-making paradigm was optimal for the 60s and 70s when these coaches learned the game, and they just haven't caught up with the times. The third is that coaches worry that they concede "momentum" when they go for it and fail, but not when they punt. It's irrational no matter what (unless case 1 is assumed), but he has the particulars wrong.

7
Summary
This brief chapter covers how people fill in information from context, and so experts often miss errors that novices catch, since the novices don't know the context well enough to make inferences.

Discussion
Good examples to illustrate the point, although are we sure the suicide isn't an urban legend?

Full Blog, "Things That Make Us Smart"

Things That Make Us Smart
Donald Norman
Perseus Books, Cambridge, MA, 1993

Summary
This book discusses the way people learn, and relate to their environment in terms of information. Themes include different types of action (reflection vs. experience), different types of learning (accretion, tuning, restructuring), the ways the human brain handles information, and the ways devices can be made to maximize the potential thereof.


Also discussed are good and bad methods for the above, with examples, logic puzzles to illustrate points, and another explanation of how affordances can be used to encode information about how a device operates.


Discussion
Although it started weak, I think this is actually my favorite of Norman's books. The 3rd and 4th chapters were both very strong, and I feel smarter than when I began reading the book. That's the interest. The biggest weakness is that it still has some issues with repetitiveness, especially when one has already read his previous books.

Microblog #45, "Things That Make Us Smart, Chapters 3-4"

4
Summary
This chapter covers reflection; that is, external aids to thought. It is fairly detailed, with an emphasis on contradicting intuitive wisdom, or what seems like such today.

Discussion
The strongest chapter of this book so far, making a bunch of good points. I had difficulty with the tic-tac-toe example (even though I knew that was what it was, I couldn't remember which value went to which square). I did, however, find the paragraph more helpful than the visual on page 61.

5
Summary
This chapter covers much the same material as the previous, only from the other end. That is, he discusses how artifacts can be made optimal for humans, rather than the way humans interact with artifacts.

Discussion
This was another strong chapter, although I've heard all the affordance stuff before. I wonder if the televisions of 1993 worked differently than today; my television has a perceptual difference between "off" black and "no picture" black.

Paper Reading #20, "Lowering the Barriers to Website Testing with CoTester"

http://csce436-hoffmann.blogspot.com/2011/04/paper-reading-19-vocabulary-navigation.html
http://aaronkirkes-chi.blogspot.com/2011/04/paper-reading-19-personalized-news.html

Lowering the Barriers to Website Testing with CoTester
Jalal Mahmud and Tessa Lau, IBM
IUI 2010, 7-10 Feb '10, Hong Kong

Summary
CoTester is, in the authors' words, "a lightweight web testing tool which can help testers easily create and maintain test script". The intention, if I understand correctly, is to have a tool that can be used for easy, automated testing of website functions. The authors extended an existing, easy-to-learn scripting language (CoScripter) for the project, with the goal of creating a script testing tool that did not require knowledge of Java/Visual Basic to utilize.

I'm afraid that the implementation, which they went into a quite detailed explanation of, was a bit beyond my level and I do not feel I can relate it properly. Interested readers should refer to the paper.

The results were quite promising. The tool did an excellent job of identifying problems, exceeding the comparison algorithm's success rate by 14% (91% to 77%), and using cosine similarity scores rather than straight-up equality checking to determine which class instructions belong in was also quite successful.
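To illustrate why cosine similarity over word counts beats exact equality for matching free-form instructions, here is a small Python sketch; the exemplar phrases and class labels are invented, and this is not CoTester's actual implementation.

```python
# Illustration only (not CoTester's code): near-matches still score highly
# under cosine similarity, whereas exact string equality would miss them.

import math
from collections import Counter

def cosine(a, b):
    ca, cb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(ca[w] * cb[w] for w in ca)
    norm = (math.sqrt(sum(v * v for v in ca.values()))
            * math.sqrt(sum(v * v for v in cb.values())))
    return dot / norm if norm else 0.0

EXEMPLARS = {
    "click":  "click the submit button",
    "enter":  "enter text into the search box",
    "verify": "verify that the page contains the text",
}

def classify(instruction):
    # Assign the instruction to the exemplar class it is most similar to.
    return max(EXEMPLARS, key=lambda label: cosine(instruction, EXEMPLARS[label]))

print(classify("click on the login button"))      # 'click', despite no exact match
print(classify("check the page contains hello"))  # closest to 'verify'
```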


Discussion
The item that jumped out at me here as questionable - and I may be a little off-base here - was that I'm not sure I want my debugging script to be written by someone without at least a minimal programming background. I mean, Java and Visual Basic aren't exactly the most difficult languages in the world. Still I'm sure there's some application for this and they seem to be getting good results.

A good automated testing system for anything development related could of course be of great benefit to any programmer, just ask the Extreme Programming guys.

I agree with the authors that a much-expanded user study would be a good next move, but I'm definitely not qualified to tell them what to do next in the technical sense.

The most accessible illustration from the paper. Pictures were not a strong suit.

Sunday, April 3, 2011

Ethnography Results, Week 8

I went back to the group that meets Saturday night at McDonalds. It was post-convention tabletop games night, so there wasn't any D&D. Nothing else Earth-shattering to report.

Microblog #44, "Why we Make Mistakes, Chapters 4&5"

4
Summary
This chapter is about how hindsight isn't as clear as we think it is. It shows several statistical studies of people misremembering to put themselves in a better light.

Discussion
This is...interesting. I'm familiar with the sports gambler phenomenon (fantasy football, etc.), but I find it tends to be far less prevalent among statistically minded people (who, generally, are also the best at predictions). I wonder if this is generally true.


5
Summary
This chapter talks about how multitasking isn't really multitasking, with an emphasis on in-car distractions.

Discussion
Visual distractions are the worst problem for drivers; at some point we'll have voice systems good enough that eyes can be kept on the road while tasks are being carried out. I think the calls for more regulation are a little overblown; at some point, writing unenforceable laws just makes you look silly (leaving aside entirely the liberty-vs-security/safety concerns).

Microblog #43, "Things that Make us Smart, Chapter 1&2"

1
Summary
This chapter opens Dr. Norman's book Things that Make Us Smart. The two principal themes are over-entertainment at the expense of education (for example, TV news vs. newspapers: more flash but generally less content) and machines working more for machines than for people (user-unfriendly interfaces' failings being entirely brushed off on people).

Discussion
If it sounds like you've heard this before, it's because you have. This chapter has a little bit more of a rambling quality than Norman's previous work, we'll see if that continues.

2
Summary
This chapter expands on the first, then describes the three kinds of learning (accretion, tuning, restructuring) and explains the phenomenon of optimal flow.

Discussion
I must admit that I was not in "optimal flow" as I was reading this. I did, however, note that he's still trying to reverse engineer things from the playstation (pg. 22).

Full Blog, "Coming of Age in Samoa"

Coming of Age in Samoa
Margaret Mead
William Morrow & Company, USA, 1928


Summary
This book is an ethnographic study of youth, especially girls, conducted in 1920s Samoa by Dr. Margaret Mead in the hopes of finding a population of that age range that wasn't caught up in the "noise" of Western society. She spends fourteen chapters discussing the results of her study, then follows it up with a statistical appendix.

Samoa is described as a very laid back culture, with the worst of the previous primitive culture having been eliminated by Western contact without having yet acquired the hectic state of the modern West. Dr. Mead's report is summarized in chapters 13 and 14, with 13 being more a real summary and 14 being a soapbox speech on its applicability to the US of the 1920s.


Discussion
The significance of this book hardly needs to be expounded upon, as it is considered the classic of the genre. The two things I really didn't like about it were: 1. there was far more information in some sections than I wanted, as I'm sure can be recalled from my commentary on the appropriate chapters, and 2. I strongly disagree with the final chapter. Dr. Mead's ideas on child rearing are anathema to me. For future work, at this point I could just pull up some materials on modern Samoa.

Saturday, April 2, 2011

Paper Reading #19, "WildThumb"

http://chiblog.sjmorrow.com/2011/03/paper-reading-19-tell-me-more-not-just.html
http://csce436spring2011.blogspot.com/2011/03/paper-reading-19-local-danger-warnings.html

WildThumb: A Web Browser Supporting Efficient Task Management on Wide Displays
Shenwei Liu, Cornell University
Keishi Tajima, Kyoto University

IUI’10, February 7–10, 2010, Hong Kong, China

Summary
This paper describes a new system designed by the authors to allow easier tab switching than the systems currently available for web browsers on widescreen monitors. It uses the extra space to display "augmented thumbnails" instead of traditional tabs, making the pages more visible and easier to click.

The thumbnails themselves consist of an image of the top of the page, with the site logo overlaying the upper left and the most prominent image on the page overlaying the lower right. The basic idea is clearly illustrated in the image below, taken from the paper.
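A hedged Pillow sketch of how such an augmented thumbnail might be composited, assuming the layout just described; the file names, sizes, and overlay proportions are placeholders of my own, not anything from the paper.

```python
# Composite a page snapshot with a logo (upper left) and the page's most
# prominent image (lower right), roughly as the thumbnails are described.

from PIL import Image

def augmented_thumbnail(page_snapshot, logo, prominent, size=(200, 150)):
    thumb = page_snapshot.resize(size)
    logo_small = logo.resize((size[0] // 4, size[1] // 4))
    prom_small = prominent.resize((size[0] // 3, size[1] // 3))
    thumb.paste(logo_small, (0, 0))                        # site logo, upper left
    thumb.paste(prom_small, (size[0] - prom_small.width,   # prominent image,
                             size[1] - prom_small.height)) # lower right
    return thumb

# Usage (paths are hypothetical):
# augmented_thumbnail(Image.open("page.png"), Image.open("logo.png"),
#                     Image.open("hero.jpg")).save("tab_thumb.png")
```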



A 9-user study conducted with experienced web browser users led to the conclusion, both from timed operations and from questionnaires issued to the subjects, that the system provided a minor increase in switching speed.

Discussion
This idea was very interesting (which I think will be illustrated by the length of this discussion section!) if, in my view, somewhat flawed. The idea of making improvements to the tab system is of course broadly applicable and would be quite useful to anyone.

The concerns I have are as follows. First, look at the above screenshot. While the contents of the unopened tabs are more clear, the trade-off is that they are also quite distracting, drawing the eye away from the primary focus. Second, the excess space being utilized here is going to vanish as more and more websites allow widescreen browsing, meaning that you will be paying an increased price in terms of readability to get these augmented sidebar thumbnails. I am also somewhat concerned about the auto-page-grouping algorithm, as I prefer to maintain control over the positioning of my tabs myself and find that the widely available click-and-drag functionality is quite adequate for this. This concern could be allayed by simply allowing that algorithm to be toggled off. I must also note that I have caught the first grammatical error I can recall in one of these papers: in the 2nd sentence of the introduction "are" should be "is".

On a positive note, the augmented thumbnails do live up to their billing, and could improve many different pages/functions that use thumbnails. The Chrome homepage, for instance.

In the future, two functions I want to see are the ability to view two different tabs from the same browser at one time (presumably, each taking up half the screen by default), and a function, otherwise similar to "favoriting," that saves and opens multiple tabs at once.