Saturday, April 23, 2011

Microblog #52, "Living With Complexity, Chapters 3-4"

This post includes the full blog for Living With Complexity


3
Summary
This chapter describes how even simple systems can be difficult to operate. Also getting considerable attention is signage.

Discussion
The examples of all the signs were interesting. I am not surprised at all about security experts duplicating passwords.

4
Summary
This chapter discusses signifiers, which are different from affordances - or at least, so says Norman. These are social or environmental cues to the correct actions to take - without "forcing" them as affordances do. The given example is correctly labeling the salt and pepper shakers.

Discussion
The concept is, as noted, awfully similar to affordances. I think once you understand the first, the second follows naturally.

Full Blog
Summary
This book deals with the extent, role, & scope of complexity in everyday life. Themes include that the world itself is complex; it also resummarizes the mental model, points out that calling for simplification at all costs is itself an oversimplification, describes how simple systems can become complex and how people cope, and explains signifiers.

Discussion
By this time, we have all come to know what to expect from Norman. This book has some good ideas, but they could have been as easily expressed with a description and an example rather than a whole chapter, and there was of course some repetition from his earlier work.

The point he makes that I like most is that we should only be upset about unneeded complexity, rather than all complexity.

Friday, April 22, 2011

Paper Reading #25, "A Code Reuse Interface for Non-Programmer Middle School Students"

http://angel-at-chi.blogspot.com/2011/04/paper-reading-19-tell-me-more-not-just.html
http://csce436-hoffmann.blogspot.com/2011/04/paper-reading-24-using-language.html

A Code Reuse Interface for Non-Programmer Middle School Students

Paul A. Gross, Micah S. Herstand, and Caitlin L. Kelleher, Washington University in St. Louis
Jordana W. Hodges, University of North Carolina

IUI’10, February 7–10, 2010, Hong Kong, China

Summary
This paper describes a tool to assist in code reuse for novice programmers, especially middle schoolers. It is specifically associated with an environment called the "Looking Glass IDE" that enables users to create animated stories. This is considered a desirable area of study because it is believed that the middle school stage is a critical point in the process of attracting boys & girls to the computer field.

The project, which was never given a name beyond code reuse interface, functions by allowing users to save a script for an action, say one object running into another and knocking it over, generalize it, and then re-insert it elsewhere or in another program, specifying new characters to assume the roles. It includes protections against compilation errors, say ordering a table to run.
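As a rough illustration of that save-generalize-reuse idea (my own sketch with invented names, not the authors' implementation), a recorded action can be parameterized by role, with capability checks standing in for the paper's protections against nonsensical commands like ordering a table to run:

```python
# Hypothetical sketch: generalize a recorded action so new characters can
# assume the roles, refusing casts that couldn't perform the action.

class Character:
    abilities = {"run", "jump", "fall_over"}

class Table:
    abilities = {"fall_over"}

def make_reusable(action):
    """Turn a recorded action into a script that accepts new role-players."""
    def script(actor, target):
        # Capability check: the stand-in for compile-time protections.
        for obj, needed in ((actor, action), (target, "fall_over")):
            if needed not in obj.abilities:
                raise TypeError(f"{type(obj).__name__} cannot {needed}")
        return f"{type(actor).__name__} {action}s into {type(target).__name__}"
    return script

collide = make_reusable("run")
print(collide(Character(), Table()))   # roles re-cast in a new program
```

Asking a `Table` to play the runner's role raises an error instead of producing a broken program, which is roughly the protection the paper describes.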

The results of the study, which are provided in some detail in the paper, appear to have been reasonably satisfying. Out of 47 subjects, 77% were able and motivated to create a program with more than five lines of code. Also successful was the "social propagation" of code and ideas among the students.

Discussion
This paper did a very good job of presenting the work in an understandable fashion, although whether that was because the authors were good writers or because the material was inherently simpler since it was ultimately intended for middle schoolers is an open question.

The paper is significant because producing more good programmers is always significant.

Given infinite time and resources, the best next work would be to give this to a bunch of students and see if it increased either their interest or ability in coding relative to their peers, measured five or more years after the initiation of the test.

The potential flaw with the whole concept is that the program exists at all. I am not convinced that novice stages in things like reading and coding are necessary. Rather than learning a cut-and-paste pre-assembled language, perhaps we should start folks out in real programming languages, just like we start 'em out in math with real equations, rather than reassembling others'. Just a thought, I'm no expert.

From the paper.

Tuesday, April 19, 2011

Microblog #51, "Living With Complexity, Chapters 1-2"

1
Summary
This chapter explains that complexity is not inherently bad, since the world around us is complex (examples are given); rather, the problem is unnecessary complexity.

Discussion
This is another Norman book, apparently more recent than the others. We'll see what he has to say this time.


2
Summary
This chapter re-introduces us to the mental model, rehashes chapter one, explains how a lot of things we think of as simple aren't as simple as we think, and finally points out that the people who are calling for simplicity may be oversimplifying the problem (the irony).

Discussion
How come he gets to cite Wikipedia and I can't?

On a more serious note, there is a lot of rehash in this chapter. The principal difference in this one almost seems to be a repudiation of the simplicity-for-simplicity's-sake that appeared occasionally in Design of Everyday Things.

Full Blog, "Why We Make Mistakes"

Summary
This book was about the reasons people make mistakes. Each chapter covers a different cause, for example skimming, gives detailed examples, backs them up with numbers, and attempts to explain them. Some of the causes were obvious, while some were more insightful. Similarly, several of them seemed to be clearly correct while in other cases the author's position was not completely convincing.

Discussion
I enjoyed this book. I think my favorite parts were where he demonstrated his points with challenges (which is the real penny, words to the anthem, etc), although how badly I clobbered the two I took may not have really helped his point all that much.

He avoided the repetition that some earlier assigned works struggled with, while still managing to find points to make that hadn't yet been made. I would consider this the best book assigned so far in this class, and probably second to Mythical Man Month overall (although that one seems to be tailing off at the end).

The hazards of a poor mental model. Source: Calvin & Hobbes, by Bill Watterson, found at site http://freewebs.com/calhobbes/sunset.gif via GIS.

Paper Reading #24, "Outline Wizard"

http://zmhenkel-chi2010.blogspot.com/2011/03/paper-reading-16-performance.html
http://ryankerbow.blogspot.com/2011/04/paper-reading-23.html

Outline Wizard: Presentation Composition and Search

Lawrence Bergman, Jie Lu, Ravi Konuru, Julie MacNaught, Danny Yeh

IBM T.J. Watson Research Center

IUI’10, February 7–10, 2010, Hong Kong, China




Summary
Outline Wizard is a PowerPoint plug-in designed to provide hierarchical structure to presentations of existing material. It is built to fill a need the authors perceive in that all current presentation software treats presentations simply as linear collections of slides. The intended benefits are improving both the effectiveness and ease of use of structure, of searching, and of incorporating results into the presentation. Additional features include an algorithm to scan a presentation and extract an outline, and searching based on the outline (either derived or provided) to more easily find content in extant presentations.

Tests indicated that both algorithms were effective, and a user study of the software met with "enthusiastic" results. Five of the six participants believed that the software would be of significant benefit, relative to existing methods; the last was undecided. The most immediate point of further proposed work would be to expand the search algorithm to return sets of slides rather than treating each slide as a single unit.

The user interface, from the paper.



Discussion
This paper was the best written and most accessible of the IUI papers I have been assigned. I hadn't thought of this type of thing beforehand, but it seems like a structure for presentations would be both interesting and useful to have. The biggest flaw in the paper was the small sample size of the user study. Six people really isn't very many. In the future, I would like to see this software tested on a much larger scale and see if the users are as pleased over a long term as the short term.

Saturday, April 16, 2011

Microblog #50, "Why We Make Mistakes, Chapters 12, 13, Conclusion"

12
Summary
This chapter describes constraints and affordances from the perspective of the WWMM authors. The point is the same, the examples different.

Discussion
The tidbit about hospitals and CVNs was interesting, if predictable.


13
Summary
This chapter deals with projection error. That is, people not accurately understanding how a change will affect their happiness.

Discussion
I didn't need this book to tell me that moving to the train wreck known as California is a terrible idea. There's a reason all the sane folks are bailing out (I just wish they'd stop voting for the same maniacs who have run California into the ground after moving to my beautiful Colorado).

Conclusion
Summary
This chapter outlines how you can avoid mistakes, mostly by enumerating the book's examples of mistakes and generalizing how they could have been avoided.

Discussion
This chapter didn't really contain any new information, and suffers from both getting a little on the sappy side and also overstating points, as in some previous chapters.

Paper Reading #23, "Facilitating Exploratory Search by Model-Based Navigational Cues"

http://alex-chi.blogspot.com/2011/04/paper-reading-18-dmacs-building.html
http://detentionblockaa32.blogspot.com/2011/04/paper-reading-23-natural-language.html

Facilitating Exploratory Search by Model-Based Navigational Cues

Wai-Tat Fu, Thomas G. Kannampallil, and Ruogu Kan
University of Illinois
Presented at IUI’10, February 7–10, 2010, Hong Kong, China

Summary
The authors of this paper built a simulator to test the notion that unstructured social tagging may cause difficulties for searchers. The hypothesis being challenged is based on the notion that casual tagging will eventually become an incoherent mess of tags. The counter-hypothesis is that the tagging isn't as random as thought, and will instead follow cohesively from whichever tags are posted earliest. That is, early tagging heavily influences the tagging of later users.

The Semantic Imitation Model was designed to simulate the actions of expert and novice users across a document space assembled for the study. The results of the simulation did seem to indicate that convergence is experienced.
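To give a flavor of how early tags can come to dominate later ones, here is a toy imitation sketch of my own (not the paper's Semantic Imitation Model): each new tagger usually copies an existing tag in proportion to its popularity, and only occasionally explores, so the vocabulary converges rather than diverging.

```python
import random
from collections import Counter

def simulate_tagging(rounds=500, vocab=("photo", "image", "picture", "snapshot"), seed=1):
    """Toy rich-get-richer tagging model: imitation in proportion to use."""
    random.seed(seed)
    tags = Counter({random.choice(vocab): 1})    # the first tagger's choice
    for _ in range(rounds):
        if random.random() < 0.9:
            # Imitate: pick an existing tag weighted by how often it appears.
            tags[random.choice(list(tags.elements()))] += 1
        else:
            # Explore: occasionally pick any tag from the vocabulary.
            tags[random.choice(vocab)] += 1
    return tags

print(simulate_tagging().most_common())
```

Running this, one tag typically ends up with the lion's share of the counts, which is the convergence behavior the paper's simulations reported.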

from the paper



Discussion
I have to be honest, I don't think much of this paper. First, the problem statement being challenged is very weak - the suggestion that tagging will go all over the place and become useless is very counter intuitive, and it almost feels like the authors built a strawman to have something to challenge.

I really don't understand why they ran simulations rather than finding some actual users to do the study for them... if you build simulators to model searchers, they will be based on your understanding of how searchers behave - even if you don't intend them to be - and will tend towards validating your understanding of the whole system.

It's possible that some of the above criticism is invalid, and I simply misunderstood. In that case, the paper's flaw is rather a failure to communicate effectively. Future work : do it again, with people this time.

In the interest of completeness : significant because due to the prevalence of searching any improved understanding thereof is quite useful.

Tuesday, April 12, 2011

Microblog #49, "Why We Make Mistake, Chapters 10 & 11"

10
Summary
This chapter discusses overconfidence, how it leads to errors, and how it is exploited by businesses.


Discussion
Your calculation of the odds may be off due to overconfidence, but it's telling that all five guys who thought they would shoot badly did. And wasn't this book whining just a chapter or two ago about the collective cowardice of professional football coaches? I think the point he's making here isn't as strong as he thinks it is.

11

Summary
This chapter is all about the perceived extent of and problems with winging it.

Discussion
The examples here (especially the bomb and the ball) are a serious step down from previous chapters. Here, the author is oversimplifying complex situations in order to make his point, and as a result not making it as well as he thinks he is.

Microblog #48, "Media Equation Parts 1, 2, & 3"

This also includes the Full Blog for Media Equation



Machines and Mindlessness: Social Responses to Computers

Clifford Nass - Stanford
Youngme Moon - Harvard
The Society for the Psychological Study of Social Issues, 2000

Computers are Social Actors
Clifford Nass, Johnathan Steuer, and Ellen R, Tauber, Stanford
CHI '94, Boston, Massachusetts.

Can Computer Personalities be Human Personalities?
Clifford Nass, Youngme Moon, BJ Fogg, Byron Reeves, and Chris Dryer, Stanford
CHI '95, Denver, Colorado


1
Summary
This paper describes how individuals react to computers as if they were humans, even though they clearly do not believe this to be the case. The authors speculate on why this is so, and appear to favor the theory that scripts simply kick in in response to certain stimuli. That is, the interaction is "mindless".

Discussion
This is an interesting topic. I wonder if the effects ascribed herein are more prevalent among the general public than among computer scientists? Or, maybe more likely, it varies by effect. I would guess computer people are less likely to be polite to a bot, but more likely to name their machines.


2
Summary
This paper covers much the same material as the first, in a shorter format with much more precise presentation of results, which were broadly similar. This provides additional evidence that people are polite to computers, treat them as entities in the social sense, and even ascribe them gender without consciously doing so.

Discussion
There isn't a whole lot to say about this paper that I didn't say about the first, although I do approve of the new format.


3
Summary
This paper is the very brief finale to the Media Equation series. It rehashes, in paper 2's style, one of the few points from paper 1 not studied in paper 2.

Discussion
See the commentary from the above. All this paper states is that people will treat a computer as a dominant actor if the word choice for the questions reflects that, and the converse.

Full Blog
Summary
This series of papers describes, in detail and with supporting graphs and experiments, the high degree to which humans treat computers as if they were humans. The extent appears to be almost anything that is taken at the unconscious level rather than having to be consciously thought about, where everyone agrees computers are not people. Interactions tested included whether humans are polite towards computers, whether they ascribe gender to computers, and whether rules about what is said to an individual's face rather than to his evaluators still hold.

Discussion
The paper was very interesting because it describes some unexpected interactions between human and machine. The biggest weakness of the paper was that it wandered somewhat, at least in the first, causing trouble with reader engagement.

It would be interesting, as future work, to see if the "separate actors" exercises would have the same results if repeated substituting different (and different-looking!) programs on the same machine for different machines.

Clifford Nass, from stanford.edu via GIS

Paper Reading #22, " DocuBrowse"

http://isthishci.blogspot.com/2011/04/paper-reading-19-from-documents-to.html
http://pfrithcsce436.blogspot.com/2011/04/paper-reading-21-supporting-exploratory.html#comments

DocuBrowse: Faceted Searching, Browsing, and Recommendations in an Enterprise Context

Andreas Girgensohn, Francine Chen, and Lynn Wilcox, FX Palo Alto Laboratory
Frank Shipman, TAMU Computer Science
Presented at IUI’10, February 7-10, 2010, Hong Kong, China

Summary
This paper describes DocuBrowse, a system to allow "easy and intuitive" enterprise searches. Enterprise searches are searches for documents inside a given organization, lacking the interlinks that make internet searching in the modern sense so effective.

The biggest advantage to DocuBrowse as a document organization system is that files can be in more than one directory; instead of having to find the one specific location you need, you can come in from any angle. Other features the authors attempted to implement include being able to see an entire tree in one query, retaining structure in results (rather than just a Google-esque list, see image), and a genre detector telling us what type of document it is. The last is accomplished based on estimation from images via a system known as GenIE (Genre Identification and Estimation) and presumably applies to scanned documents, since otherwise you can tell a .rtf from a .doc, etc., by the file extension.
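That multiple-directories idea can be sketched as a simple faceted index (the names and structure here are my own invention, not DocuBrowse's): each document is filed under several facet paths at once, and a query intersects whichever facets the user comes in from.

```python
from collections import defaultdict

# Facet -> set of documents carrying that facet.
index = defaultdict(set)

def file_doc(doc, facets):
    """File one document under every facet it belongs to."""
    for facet in facets:                 # e.g. project, genre, author
        index[facet].add(doc)

def browse(*facets):
    """Documents matching every requested facet, from any angle."""
    results = [index[f] for f in facets]
    return set.intersection(*results) if results else set()

file_doc("q3_report.pdf", {"project:alpha", "genre:report", "author:chen"})
file_doc("memo_17.doc", {"project:alpha", "genre:memo"})
print(browse("project:alpha", "genre:report"))   # {'q3_report.pdf'}
```

The same document is reachable via its project, its genre, or its author, with no one "true" directory.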

In determining relevance of documents to individuals, organizational structure and job class replace their access history. That is, instead of being pointed towards documents they have seen before, they are pointed towards documents that are relevant to their position or that others with the same or equivalent job titles have accessed.

The authors state their next intended move in this research is testing with some kind of large organization, as they have only conducted in-house tests so far.




Discussion
This paper is significant because it would be difficult to imagine trying to find critical information without access to a modern search engine. Expanding that as widely as possible is an important goal.

The biggest weakness of the paper was a failure to note how well it worked in in-house testing or better explain GenIE. The biggest strength was good diagrams.

The future research proposed seems solid, although they might also consider an auto-keywording system if they don't already have one. Currently the implication seems to be it is all manual.

Thursday, April 7, 2011

Paper Reading #21, "Towards a Reputation-based Model of Social Web Search"

http://jimmymho.blogspot.com/2011/04/paper-reading-21-multimodal-labeling.html
http://vincehci.blogspot.com/2011/04/paper-reading-20-data-centric.html

Towards a Reputation-based Model of Social Web Search

Kevin McNally, Michael P. O’Mahony, Barry Smyth, Maurice Coyle, Peter Brigg
University College Dublin, Ireland
Presented at IUI '10, Feb 7-10 2010, Hong Kong

Summary
This paper is about the HeyStaks system for collaborative usage of search engines such as Google. The authors found a need for such a device due to extensive collaborative usage of such systems even without explicit software support in place.

HeyStaks works on a reputation model, carefully designed to have its incentives track actual "production of useful shared search content". Other users can vote the information provided by any given searcher as useful or not useful, among other mechanisms. This was found to be reasonably successful in preventing 'gaming' of the system to produce high reputation without producing content.
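A minimal sketch of that voting idea (my own toy, not the HeyStaks algorithm itself): reputation accrues only when other users mark a contribution useful, which blunts the most obvious self-promotion gaming.

```python
from collections import defaultdict

# Producer -> accumulated reputation score.
reputation = defaultdict(int)

def vote(voter, producer, useful):
    """Credit or debit a producer based on another user's vote."""
    if voter == producer:
        return                        # self-votes don't count
    reputation[producer] += 1 if useful else -1

vote("bob", "alice", useful=True)
vote("carol", "alice", useful=True)
vote("alice", "alice", useful=True)   # ignored
print(reputation["alice"])            # 2
```

A real anti-gaming scheme would need far more (vote weighting, collusion detection), but the incentive shape is the point.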

One interesting anomaly is that the users seemed to break down into searcher/follower clusters even without this being an explicitly coded or even intended outcome of the system. The results of the user study included in the paper (see image below) sort of reflect this, with five producers and twenty-one consumers.

From the paper.

The authors currently have a 500 user beta underway, but no results from it were included in the paper.


Discussion
The idea here is interesting - certainly it is undeniable that a lot of collaborative searching happens, so anything that helps in that area would be useful - but I'm not sure about its broad applicability. Since the users already have two terminals available to them, I don't see where there is a big prospect for improvement. After all, the example of current collaboration they gave was a user suggesting search terms over another's shoulder.

Let me put my concern this way: When searching the desert, a two seat plane is better than a one seat plane. But is it as good as or better than two one seat planes? I doubt it.

The public beta is a good next step, and I'd kind of like to see a Wizard of Oz type study mocking up the planned finished product.

The other thing they could have improved on was a clearer explanation of how the HeyStak worked. Not the mechanics of the algorithm, but how it interacted with the users. How were searches sent back and forth, for instance? The information may be there, but I can't seem to sift it out.

Microblog #47, "Why We Make Mistakes, Chapters 8 and 9"

8
Summary
This chapter deals with the internal organization of information, using misremembered facts to advocate the notion that we store information in orderly constellations, even when the information is disorderly.

Discussion
The results of the Star Spangled Banner experiment, for me (I didn't look up anything):

Oh say can you see by the dawn's early light
What so proudly we hailed at the twilight's last gleaming
Whose broad stripes and bright stars through the perilous fight
O'er the ramparts we watched were so gallantly streaming?
And the rockets red glare the bombs bursting in air
Gave proof through the night that our flag was still there
Oh say does that star spangled banner yet wave
O'er the land of the free and the home of the brave 


On the shore dimly seen through the mists of the deep
Where the foes haughty host in dread silence reposes
What is that which the breeze, o'er the towering steep
As it fitfully blows half conceals, half discloses?
Now it catches the gleam of the morning's first beam
In full glory reflected now shines on the stream
'Tis the star spangled banner O long may it wave
O'er the land of the free and the home of the brave


And where is the foe who so vauntingly swore
That the havoc of war and the battle's confusion
A home and a country would leave us no more?
Their blood has washed out their foul footsteps pollution
And no refuge could save the hireling and slave
From the terror of flight, or the gloom of the grave
And the star spangled banner in triumph doth wave
O'er the land of the free and the home of the brave


Thus shall it be ever when free men shall stand
Between their loved homes and the wars desolation
Blessed with vict'ry and peace may the Heaven rescued land
Praise the Power which hath made and preserved it a nation
And conquer we must for our cause it is just
Let this be our cry "In God is our trust"
And the star spangled banner forever shall wave
O'er the land of the free and the home of the brave


Making allowances for 'and' vs '&' and 'watched' vs 'watch'd', etc I got all 82 words. For the record, the book has an error on line 5 of the lyrics - "bomb" should be the plural "bombs".

I don't think this is the best example they could have picked. The Anthem is an old enough song that there are multiple correct versions (cry vs motto, verse 4 line 6), so there are some cases where it isn't really wrong to get lyrics different from the standard being used as a control here.


9

Summary
This chapter discusses the differences in the performances of men and women in various studies and statistics, then relates it to memory.

Discussion
After getting off to a good start the book comes crashing back to earth in this chapter. The two initial examples (Traffic Tickets & Saddam WMDs) are both blatantly flawed (They didn't even bring up the obvious first explanation to the male-female ticket disconnect, namely that cops will be less likely to give a ticket to a woman than a man they pulled over for doing the same thing, for example). Still, some of their point may have merit.

Tuesday, April 5, 2011

Microblog #46, "Why We Make Mistakes, Chapters 6&7"

6
Summary
This chapter covers how frames of mind influence decisions, especially potential losses vs. potential gains.

Discussion
I am flatly insulted by the insinuation that The Piano is somehow a "better" movie than a classic like Clear and Present Danger. It's not quite Hunt For Red October, but come on!

The phenomenon with NFL coaches is well documented, but I strongly doubt that it has anything to do with not knowing the odds. Among people who know anything about the situation (as opposed to this Hallinan character, who saw some numbers that might possibly be construed as supporting some point he wanted to make) many explanations have been offered for this behavior, and "not understanding the risks" isn't one of them.

The first, most obvious, and most often cited theory is that the coaches are playing the odds correctly...just not the team's odds. The way fans and the media perceive games, if a coach goes for it and fails it is his fault, but if he kicks instead and something goes wrong it's the kicker that catches heat. Thus, a conservative coach protects his own job security at the expense of the kicker and the team. A form of yellow-bellied moral cowardice to be sure, but not irrational in the way Hallinan implies. (Rather, he's close but has the locus on the coach when it should be on the less-football smart individuals who still have influence. This includes the ticket buying public).

Second theory says that the current decision making paradigm was optimal for the 60s and 70s when these coaches learned the game, and they just haven't caught up with the times. The third is that coaches worry that they concede "momentum" when they go for it and fail, but not when they punt. It's irrational no matter what (unless case 1 is assumed) but he has the particulars wrong.

7
Summary
This brief chapter covers how people put together information from the context, and so experts often miss errors that novices catch because they don't know the context well enough to make inferences.

Discussion
Good examples to illustrate the point, although are we sure the suicide isn't an urban legend?

Full Blog, "Things That Make Us Smart"

Things That Make Us Smart
Donald Norman
Perseus Books, Cambridge, MA, 1993

Summary
This book discusses the way people learn, and relate to their environment in terms of information. Themes include different types of action (reflection vs. experience), different types of learning (accretion, tuning, restructuring), the ways the human brain handles information, and the ways devices can be made to maximize the potential thereof.


Also discussed are good and bad methods for the above, with examples, logic puzzles to illustrate points, and another explanation of how affordances can be used to encode information about how a device operates.


Discussion
Although it started weak, I think this is actually my favorite of Norman's books. The 3rd and 4th chapters were both very strong, and I feel smarter than when I began reading the book. That's what makes it interesting. The biggest weakness is that it still has some issues with repetitiveness, especially when one has already read his previous books.

Microblog #45, "Things That Make Us Smart, Chapters 3-4"

4
Summary
This chapter covers reflection; that is, external aids to thought. It is fairly detailed, with an emphasis on contradicting intuitive wisdom, or what seems like such today.

Discussion
The strongest chapter of this book so far, making a bunch of good points. I had difficulty with the tic-tac-toe example (even though I knew that was what it was, I couldn't remember which value went to which square). I did, however, find the paragraph more helpful than the visual on page 61.

5
Summary
This chapter covers much the same material as the previous, only from the other end. That is, he discusses how artifacts can be made optimal for humans, rather than the way humans interact with artifacts.

Discussion
This was another strong chapter, although I've heard all the affordance stuff before. I wonder if the televisions of 1993 worked differently than today; my television has a perceptual difference between "off" black and "no picture" black.

Paper Reading #20, "Lowering the Barriers to Website Testing with CoTester"

http://csce436-hoffmann.blogspot.com/2011/04/paper-reading-19-vocabulary-navigation.html
http://aaronkirkes-chi.blogspot.com/2011/04/paper-reading-19-personalized-news.html

Lowering the Barriers to Website Testing with CoTester
Jalal Mahmud and Tessa Lau, IBM
IUI 2010, 7-10 Feb '10, Hong Kong

Summary
CoTester is, in the authors' words, "a lightweight web testing tool which can help testers easily create and maintain test script". The intention, if I understand correctly, is to have a tool that can be used for easy, automated testing of website functions. The authors extended an existing, easy-to-learn scripting language (CoScripter) for the project, with the goal of creating a script testing tool that did not require knowledge of Java/Visual Basic to utilize.

I'm afraid that the implementation, which they went into a quite detailed explanation of, was a bit beyond my level and I do not feel I can relate it properly. Interested readers should refer to the paper.

The results were quite promising. The tool did an excellent job of identifying problems, exceeding the comparison algorithm's success rate by 14% (91% to 77%), and using cosine similarity scores rather than straight-up equality checking to determine which class instructions belong in was also quite successful.
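For readers unfamiliar with the technique, here is a minimal sketch (my own, not CoTester's code) of why cosine similarity beats exact equality for classifying instructions: treating each instruction as a bag of words lets near-matches count.

```python
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two strings as bags of words."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    norm = math.sqrt(sum(c * c for c in va.values())) * \
           math.sqrt(sum(c * c for c in vb.values()))
    return dot / norm if norm else 0.0

def classify(instruction, classes):
    """Pick the class whose exemplar is most similar to the instruction."""
    return max(classes, key=lambda c: cosine(instruction, classes[c]))

classes = {"click": 'click the "Login" button',
           "enter": 'enter "bob" into the username field'}
print(classify('click the "Submit" button', classes))   # click
```

Strict equality would reject 'click the "Submit" button' because no exemplar matches it exactly; cosine similarity still assigns it to the right class.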


Discussion
The item that jumped out at me here as questionable - and I may be a little off-base here - was that I'm not sure I want my debugging script to be written by someone without at least a minimal programming background. I mean, Java and Visual Basic aren't exactly the most difficult languages in the world. Still I'm sure there's some application for this and they seem to be getting good results.

A good automated testing system for anything development related could of course be of great benefit to any programmer, just ask the Extreme Programming guys.

I agree with the authors that a much-expanded user study would be a good next move, but I'm definitely not qualified to tell them what to do next in the technical sense.

The most accessible illustration from the paper. Pictures were not a strong suit.

Sunday, April 3, 2011

Ethnography Results, Week 8

I went back to the group that meets Saturday night at McDonalds. It was post-convention tabletop games night, so there wasn't any D&D. Nothing else Earth-shattering to report.

Microblog #44, "Why we Make Mistakes, Chapters 4&5"

4
Summary
This chapter is about how hindsight isn't as clear as we think it is. It shows several statistical studies of people misremembering to put themselves in a better light.

Discussion
This is...interesting. I'm familiar with the sports gambler phenomenon (fantasy football, etc) but I find it tends to be far less prevalent among statistically minded people (who, generally, are also the best at predictions). I wonder if this is generally true.


5
Summary
This chapter talks about how multitasking isn't really multitasking, with an emphasis on in-car distractions.

Discussion
Visual distractions are the worst problem for drivers; at some point we'll have voice systems good enough that eyes can be kept on the road while tasks are being carried out. I think the calls for more regulation are a little overblown; at some point, writing unenforceable laws just makes you look silly (leaving aside entirely liberty-vs-security/safety concerns).

Microblog #43, "Things that Make us Smart, Chapter 1&2"

1
Summary
This chapter opens Dr. Norman's book Things that Make Us Smart. The two principal themes are over-entertainment at the expense of education (for example, TV news vs. newspapers, more flash but generally less content) and machines working more for machines than for people (user-unfriendly interfaces' failings being entirely brushed off on people).

Discussion
If it sounds like you've heard this before, it's because you have. This chapter has a little bit more of a rambling quality than Norman's previous work; we'll see if that continues.

2
Summary
This chapter expands on the first, then describes the three kinds of learning (accretion, tuning, restructuring) and explains the phenomenon of optimal flow.

Discussion
I must admit that I was not in "optimal flow" as I was reading this. I did, however, note that he's still trying to reverse engineer things from the PlayStation (pg. 22).

Full Blog, "Coming of Age in Samoa"

Coming of Age in Samoa
Margaret Mead
William Morrow & Company, USA, 1928


Summary
This book is an ethnographic study of youth, especially girls, conducted in 1920s Samoa by Dr. Margaret Mead in the hopes of finding a population of that age range that wasn't caught up in the "noise" of Western society. She spends fourteen chapters discussing the results of her study, then follows it up with a statistical appendix.

Samoa is described as a very laid-back culture, with the worst of the earlier primitive culture having been eliminated by Western contact without the hectic state of the modern West having yet been acquired. Dr. Mead's report is summarized in chapters 13 and 14, with 13 being more a true summary and 14 being a soapbox speech on its applicability to the US of the 1920s.


Discussion
The significance of this book hardly needs to be expounded upon, as it is considered the classic of the genre. The two things I really didn't like about it were: 1. there was far more information in some sections than I wanted, as I'm sure can be recalled from my commentary on the appropriate chapters, and 2. I strongly disagree with the final chapter. Dr. Mead's ideas on child rearing are anathema to me. For future work, at this point I could just pull up some materials on modern Samoa.

Saturday, April 2, 2011

Paper Reading #19, "WildThumb"

http://chiblog.sjmorrow.com/2011/03/paper-reading-19-tell-me-more-not-just.html
http://csce436spring2011.blogspot.com/2011/03/paper-reading-19-local-danger-warnings.html

WildThumb: A Web Browser Supporting Efficient Task Management on Wide Displays
Shenwei Liu, Cornell University
Keishi Tajima, Kyoto University

IUI’10, February 7–10, 2010, Hong Kong, China

Summary
This paper describes a new system designed by the authors to allow easier tab switching than the systems currently available in web browsers on wide screen monitors. It uses the extra space to display "augmented thumbnails" instead of traditional tabs, making the pages more visible and easier to click.

The thumbnails themselves consist of an image of the top of the page, with the site logo overlaying the upper left and the most prominent image on the page overlaying the lower right. The basic idea is clearly illustrated in the image below, taken from the paper.



A 9-user study conducted with experienced web browser users led to the conclusion - based both on timed operations and on questionnaires issued to the subjects - that the system provided a minor increase in switching speed.

Discussion
This idea was very interesting (which I think will be illustrated by the length of this discussion section!) if, in my view, somewhat flawed. The idea of making improvements to the tab system is of course broadly applicable and would be quite useful to anyone.

The concerns I have are as follows. First, look at the above screenshot. While the contents of the unopened tabs are clearer, the trade-off is that they are also quite distracting, drawing the eye away from the primary focus. Second, the excess space being utilized here is going to vanish as more and more websites allow widescreen browsing, meaning that you will be paying an increased price in terms of readability to get these augmented sidebar thumbnails. I am also somewhat concerned about the automatic page-grouping algorithm, as I prefer to maintain control over the positioning of my tabs myself and find that the widely available click-and-drag functionality is quite adequate for this. This concern could be allayed by simply allowing that algorithm to be toggled off. I must also note that I have caught the first grammatical error I can recall in one of these papers: in the 2nd sentence of the introduction, "are" should be "is".

On a positive note, the augmented thumbnails do live up to their billing, and could improve many different pages/functions that use thumbnails. The Chrome homepage, for instance.

In the future, two functions I want to see are the ability to view two different tabs from the same browser at one time (presumably, each taking up half the screen by default), and a function, otherwise similar to "favoriting", that saves and opens multiple tabs at once.

Wednesday, March 30, 2011

Microblog #42, "Coming of Age in Samoa, Appendix III"

Summary
This chapter consists of Dr. Mead giving an outline of Samoan culture as it appeared to her during her time there.

Discussion
One of the most interesting chapters of the book. I found borrowing California's legal code to be particularly humorous.

Microblog #41, "Why We Make Mistakes, Chapters 2 and 3"

2
Summary
In this chapter the authors speculate on what makes things memorable. They advocate the view that it is the meaning they have.

Discussion
This chapter...I don't know about some of the conclusions they draw from their facts. Anyway, I correctly picked A.

3
Summary
This chapter describes snap judgments, and also areas such as changing answers on exams where results seem to contradict common wisdom.

Discussion
This chapter just seemed backwards. The stuff they couldn't explain seemed obvious, and for the stuff they did try to explain, their explanations didn't sit well with me.

Take, for example, the pictures of the Senate hopefuls. They said they didn't understand why the guy on the left was more 'Senatorial' looking than the guy on the right, when it was pretty obviously the flag and the pin, at least to me. He looks like a politician, and the other guy looks like a regular guy. (Neat experiment: frame it so you can only see their faces. The guy on the right is still a regular guy, but the guy on the left now looks like a villain from a Sherlock Holmes novel.)

As far as test answers go, I'm willing to bet that takers don't change responses unless they're sure. If the common wisdom were that changing answers was better, then they'd only stay if sure, and changing would appear to be the weaker approach.

Tuesday, March 29, 2011

Paper Reading #18, "Embedded Media Barcode Links"

http://wkhciblog.blogspot.com/2011/03/paper-reading-16.html
http://pfrithcsce436.blogspot.com/2011/03/paper-reading-17-uimarks-quick.html

Embedded Media Barcode Links: Optimally Blended Barcode Overlay on Paper for Linking to Associated Media
Qiong Liu, Chunyuan Liao, Lynn Wilcox, and Anthony Dunnigan of FX Palo Alto Laboratory
ICMI-MLMI 2010, November 8-10 2010, Beijing, China

Summary
This paper is an overview of the concept of, and then a detailed look at the implementation of, a barcode reader allowing multimedia information encoded in traditional paper documents to be viewed on cell phones. The two previous paradigms in this field are explained in some detail. First, there is the barcode that must be kept separate from the text and is therefore reader-unfriendly. Second, there was the barcode in invisible toner, which required phones to scan the entire document just to find it.

The innovation here is a semi-transparent barcode that can be distinguished clearly from both the text and the background. This has the best of both worlds and the worst of neither, as it allows data to be collected without making the paper unreadable or absurdly large.

The paper contains a wealth of technical information, and those interested can peruse it. The improved blending coefficient making the transparency feasible is the best aspect, according to the authors.
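As an aside, the core blending idea can be sketched with ordinary alpha compositing. To be clear, the paper's contribution is an improved, content-dependent blending coefficient; the fixed alpha below is my own placeholder, just to illustrate the trade-off:

```python
# Sketch of the general idea behind a semi-transparent barcode overlay:
# plain alpha compositing of a barcode pixel over a page pixel.
# NOTE: the fixed alpha is a stand-in for illustration; it is NOT the
# paper's optimized blending coefficient.

def blend_pixel(barcode: float, page: float, alpha: float = 0.35) -> float:
    """Composite a barcode intensity over a page intensity (both in 0..1).

    alpha sets the trade-off: high enough that a camera can still
    decode the barcode, low enough that text underneath stays legible.
    """
    return alpha * barcode + (1.0 - alpha) * page

# A dark barcode module (0.0) over white paper (1.0) becomes a light
# gray that underlying text can still show through.
print(blend_pixel(0.0, 1.0))
```

The same one-line blend, applied per pixel with a well-chosen (rather than fixed) coefficient, is essentially what makes the "best of both worlds" claim work.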

Best picture from the paper. Transparent barcode (barely) visible in top right.


Discussion
This whole idea is very interesting to me, as this hadn't occurred to me before and is, frankly, pretty cool. Something subtle and at least conceptually easy, like what is done here, while still being a big advantage, is the pinnacle of innovation. I really like their idea. One thing they could improve would be more, better visual aids. I wasn't impressed with the illustrations.

I think the next thing I would want to do here would be to see this in action! User studies are important, and they don't appear to have done one yet. (I realize they self-tested, but that's never quite the same thing).

Full Blog, "Emotional Design"

Emotional Design
Donald Norman
Basic Books, New York, 2004

Summary
Calling this a full book blog is something of a misnomer, as we only read the first three chapters. The introduction and first chapter detail the flaws in the earlier Design of Everyday Things, being not really a complete rebuttal of Norman's earlier book but rather an acknowledgment that the design-only philosophy is perhaps too narrow. The second chapter explores the way emotion works in design, and the third covers visceral, behavioral, and reflective design and how to distinguish them.

Discussion
Emotional Design is classic Norman, starting out by making a bunch of good points in rapid succession and then repeating them over and over through the next two chapters and, I infer, the rest of the book. He suffered far more here from overstating his case than at points previous, and I felt his points were less well supported. The three-layer model I find to be too stratified; the lines should be much blurrier than they are made here. Of course, you've already heard my opinion on that in class w.r.t. the midterm and probably aren't interested in a rehash.

Monday, March 28, 2011

Ethnography Results, Week 7

I took the time out of my schedule to go to AggieCon, figuring I would get in on the "flexible" D&D game and spend a couple of hours studying that (as well as the Con itself, having never been to one). As it turned out, the game was full when I showed up and a non-RPG (but still tabletop) game kind of developed at the table I was waiting at. Obviously, I never got to the D&D table. Still, the convention was an interesting experience, and I had a great time for the two and a half hours I could spare from the Capstone project and other obligations. What I learned there reinforces what I had seen elsewhere during the course of this research; gaming groups will let just about anyone play, and are generally really easy to get along with.

I'm thinking the last big thing that needs to be written up for this project is a good example of a game session; it doesn't have to be a full, detailed transcript, but anything to give our faithful readers a good idea of what one of these things actually looks like. (Although the kibbitzing may be hard to record properly, and it seems that there are always many inside jokes.) I intended to write such a post today, based on a partially free-form game my brother and I played over Spring Break (since I didn't really learn anything new from it, I didn't), but I've decided that is better saved for the grand Gamma World finale Stuart has proposed for week 8 (assuming it happens, which is iffy given our current workloads).

So, instead, today I'm going to play the inside man and write about some things that I've learned as a game-master for the groups I have played with in the past. I'm going to use the aforementioned game as the format for examples. I'm going to present it as a list of things I did and tried to do, with a little of my thought process, and let the reader fill it in from there, rather than trying to self analyze too deeply. Oh, the setting was (relatively) hard sci-fi. Hopefully this will give a better understanding of how these games operate, and what an effective GM has to do.

*Improvisation. This was one of those sessions that materialized on the fly, but the pacing still remained excellent and the setting worked well. Considering that I had about one sentence of plot written when it started, I thought that was great.
*Engaged the player(s). With only one player, who I knew well, this was easy. Give him a girl to rescue and a flag to paint on his shiny new spaceship (we'll get to that in a minute) and he can really get rolling.
**For example, my brother is one of the "Pluto is still a planet" crowd. One line about how Pluto is really "sketchy" because all the corporations pulled out to "fund real planets" and we have gone from "space pirate" to "space pirate with a cause".
*Gave the villain(s) a distinctive trait. The Earth Secret Police have had their eyes removed and replaced with computer sensors. It served to give them character as well as make them "faceless".
*Rewarded creativeness. I should have seen "I throw four bombs and four fakes out the airlock at the secret police" coming, but I didn't and they wouldn't have either.
*Didn't arbitrarily kill off or otherwise eliminate PCs for no reason.
*Made locations memorable (or at least tried to). Pluto was the best one, but I'd like to think some other elements came across well too.
*Let the players add to the setting while it's in progress. I had no idea there were lifelike holograms available, but if the players pitch something reasonable, work it in. It makes everyone more engaged, and helps really flesh things out.
**On the same note, the character known only as "Always Angry" (A space punk pirate from Venus) got some unexpected character development when we realized we'd both switched to calling her "Almost Angry" without realizing it.
*I'm also really proud of how Switzerland was the only politically independent body in the Solar System. I'm not sure why, but it worked great.
*This was a very crunch-light game. We must have been 7-8 hours in before rolling a die.
*I've found sometimes that working in a world that exists only for that campaign as opposed to a well defined and established one that has to stay somewhat the same can make for really great games. I can have my characters right at the heart of the universe and still make it make sense.
*Always, always, always have read or watched a lot of genre-relevant fiction that your players haven't, especially when you know you are going to have to improvise. That doesn't mean you have to be better read, but it does mean there can't be one-to-one overlap, and you have to know what they know. This lets you borrow elements of the plots and/or settings quickly, with the necessary tweaks to fit. It also gives you a bunch of character names to assemble quickly.

DANIEL STOP READING HERE

Sources I was heavily inspired by include:
The Lensman series. (E.E. Smith)
The Voice of the Whirlwind
The Moon is a Harsh Mistress
Babylon 5
...and most of the names were random recombinations of given and family names from characters in John Wayne westerns.

Microblog #40, "Coming of Age in Samoa, Chapter 14"

Summary
In this chapter, Dr. Mead rails against the evils of parents teaching their morals to their children, using the perceived laid-back nature of Samoa as her lever.

Discussion
This book would be considerably improved without this soapbox chapter tacked onto the end. Permission to choose may require permission to choose wrongly, and all the pain that can entail, but how much better that than to have no choice at all!

Paper Reading #17, "Estimating User’s Engagement from Eye-gaze Behaviors in Human-Agent Conversation"

http://dlandinichi.blogspot.com/2011/03/paper-reading-16-tag-expression-tagging.html
http://chi2010-cskach.blogspot.com/2011/03/paper-reading-18-speeding-pointing-in.html


Estimating User’s Engagement from Eye-gaze Behaviors in Human-Agent Conversation
Yukiko I. Nakano, Seikei University
Ryo Ishii, NTT Cyber Space Laboratories
Presented at IUI '10, February 7-10, 2010 in Hong Kong

Summary
This is a long and very technical paper compared to those from the previous conferences. In it, some Japanese researchers attempt to quantify eye movements as a measure of engagement, presumably in the hope that future algorithms, using this data and more like it, will be able to interact better with humans.

The experiment conducted was on the Wizard of Oz model, where a human acting behind a computer interface played the part of the interacting algorithm, to see what was effective and how it would be used. The theme was a cell phone salesman trying to keep a subject engaged in the conversation; the subject was also motivated by a 1,000 yen reward for correctly identifying which cell phones are most popular with which age groups. Engagement was measured by both the subject and an observer, each of whom had a "boredom" button referring to the state of the subject.

I see no point in including the many detailed graphs - they would take up far too much space - but I will note that the conclusion was that the best way to maintain engagement is with properly timed probing questions. Also worth noting is a very significant disconnect between when subjects and observers pushed their buttons.

From the paper.


Discussion
The paper is significant because improved interaction with humans is a clear goal of CHI. The worst flaw is that it is very difficult for the uninitiated to read - although this may be the venue rather than a flaw per se. My next move in their position would be to repeat the Wizard of Oz experiment with a different frame than cell phone sales and see if a different topic gets different results.

Tuesday, March 22, 2011

Paper Reading #16, "A Practical Pressure Sensitive Computer Keyboard"

http://wkhciblog.blogspot.com/2011/03/paper-reading-15.html
http://chi2010-cskach.blogspot.com/2011/03/paper-reading-16-using-fnirs-brain.html

A Practical Pressure Sensitive Computer Keyboard

Paul H. Dietz, Benjamin Eidelson, Jonathan Westhues and Steven Bathich of Microsoft Corporation
Presentation venue not specified in paper, presumed to be UIST 2009

Summary
This paper is a report on research done in the area of an improved keyboard, in this case using pressure sensitivity to allow for increased features and versatility. Hopefully, this would all be achieved without paying a price in intuitiveness. For motivation, the authors lament that the current keyboard is still functionally the same as at 'the dawn of the computer'.

The authors follow by summarizing the prior work in this field, and explaining the technical details of the construction of current keyboards. These details are omitted, as I feel technical details are secondary and I want to focus on what this paper is doing, not what others have done before.

This keyboard is built as a 'modified flexible membrane design', with the goal of detecting increasing pressure only after the point where a conventional keyboard would have registered the key as pressed. I have omitted the EE major's portion of the paper, because I don't understand it and don't think I could do it justice. I will focus more on implications.

Suggested applications include gaming, where you could push a key harder to go faster, emotional IMs where font size scales with the force of key presses, and allowing minor increases in proficiency and accuracy in conventional typing.
Taken from the paper.
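The "emotional IM" suggestion boils down to a mapping from measured extra key force to font size. Here is a minimal sketch, with thresholds and sizes that are entirely made up for illustration (the paper doesn't specify a mapping):

```python
# Hypothetical sketch of the "emotional IM" application: map a key's
# extra pressure (beyond normal actuation, per the paper's design goal)
# to a font size. All numbers here are invented for illustration.

def font_size_for_pressure(pressure: float,
                           base_size: int = 12,
                           max_size: int = 36,
                           max_pressure: float = 1.0) -> int:
    """Linearly scale font size with normalized extra pressure."""
    # Clamp to [0, max_pressure], then normalize to 0..1.
    p = min(max(pressure, 0.0), max_pressure) / max_pressure
    return round(base_size + p * (max_size - base_size))

print(font_size_for_pressure(0.0))  # gentle tap -> base size
print(font_size_for_pressure(1.0))  # hard press -> maximum size
```

Per-key calibration would presumably also be needed, since not all keys are conventionally struck with the same force.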

Finally, they discuss the potential obstacles to market viability and why they think this keyboard overcomes them. Specifically, cost increases are minor compared to conventional keyboards, no functionality is lost, and pressure sensing can be adopted by software gradually, allowing users to adapt slowly.


Discussion
While this approach has some merit, I think they're trying to force a niche a little bit. After all, how long have the hammer and nail been around without being fundamentally replaced? The keyboard has remained unchanged because it does its job and does it well. The cutoff for sensing pressure being the point where a conventional keyboard would activate is the best idea in the paper.

This paper is interesting because of the potential applications, and significant because anything affecting keyboards on a wide scale will be very significant. That said, I have some serious concerns about the device as presented here.

I think they can be summed up fundamentally as concerns with moving from a digital control to an analog control, which would seem on the face of it to be backwards. Although it may be only marginally more expensive to produce, I think upkeep and replacement costs would increase, as this keyboard would begin to lose precision well before the point where a current keyboard - which either is pressed at any given time or isn't - would finally die. Also, giving different results for pressing the keys harder would encourage users to be rough with the keyboards, further increasing these costs.

This lack of precision is pretty concerning to me as a gamer. I don't want an emotional response from my controls, I want precision. I know exactly how fast I am going when I use any forward motion key, and I am concerned that such a device would make my movements awkward in games I already know. Of course, that would probably just mean an adaptation period, so this isn't nearly as much of a concern as the above overhead.

One thought, which is mostly funny and not a criticism, since it could be easily corrected for, is that not all keys are conventionally pressed with the same force. Would J usually be larger than Q in the emotional IM proposed?

The biggest weakness of the paper is the lack of any testing, with users or otherwise, so far as I can tell. Accordingly, my idea for future research is to do extensive testing.

Finally, if the stenographers today don't use QWERTY, what do they use? Was the wording of this paper just misleading?

Microblog #39, "Why We Make Mistakes, Intro. & Chap. 1"

Introduction
Summary
Hallinan uses a pediatrician, anesthesiologists, farmers, and global warming to set up the premise of his book. Generally, it is that by understanding how both humans and the environment contribute to mistakes, we can minimize them.


Discussion
Sounds familiar, doesn't it? So far, this sounds like another angle at something we've been hearing all along. I hope you weren't expecting profound insights on the introduction.

1
Summary
This chapter mostly consists of providing examples of the kinds of mistakes that are made. Examples include petty things like finding beer and serious things like finding tumors. A cause pointed at is short attention spans.

Discussion
The tumors thing is pretty disturbing...the airport screeners less so. I guess everyone already knows you can get whatever you want onto a plane if you try. I hope this guy finds some new points to make and doesn't just read like a rehash of Norman's slips & errors chapter.

Microblog #38, "Coming of Age in Samoa, Chapter 13"

Summary
Dr. Mead discusses her view of the contrasts between the place of girls in Samoan and American society, and what she thinks we can learn from them. Or, rather, what we could have learned from them 80 years ago.

Discussion
This whole chapter just seems off to me. I guess it just goes to show that Dr. Mead and I are operating from very different moral codes.

Full Blog, "Obedience to Authority"

Obedience to Authority
Stanley Milgram
New York: Harper & Row, 1974

Summary
The book describes in great detail the results of Stanley Milgram's now famous fake electroshock experiments, and then describes in depth Milgram's psychological theories as to why he got the results he did.

The first section describes many different variations on the original experiment, and shows all the data (how many people cut off at which shock level in each variation, etc) in chart form for all of them. It also describes the cases of a few individuals in transcript level detail.

The second draws heavily on evolutionary theory to explain why people need the hierarchy Milgram feels his experiment shows they do, and what the effects of being "morally asleep" are. He cites additional supporting evidence for his positions.

Discussion
I'm going to say the nice things first. First, Milgram did a far better job of explaining his experiment and making his case than Mrs. Slater did for him. No surprise, but I felt it worth mentioning. Second, the level of detail was excellent.

Now the two criticisms. First, I still don't think the experiment shows what he thinks it does. I don't think the driving factor is obedience to authority; authority is surely a factor, but I don't think it overrides morality in the way Milgram thinks it does. Rather, what I think is shown here is that if you give people diametrically opposite social cues you will succeed in confusing them - hardly a groundbreaking revelation, but not really what most people seem to think this experiment shows.

Compare it to the experiment from a different chapter of Opening Skinner's Box. In the one I'm talking about, one subject, sometimes with one to five actors paid not to react, is sitting in a room into which fake smoke is pumped. Of course, the subject always reported it when by himself, but rates dropped precipitously when the actors were present. This is the same thing; when there were individuals in the room not acting as if there was a fire, he at some level questioned (correctly) whether there was actually a fire. Note that this situation shows very similar results in spite of having no authority telling him not to react to the smoke, and no strong moral component (rather, self-preservation being the driving factor here).

I think that a far better explanation for the results here is that, at some level, the subjects questioned (correctly) whether there was actually any danger to the learner, whether they realized it or not. FWIW, that entirely destroys the Nazi analogy Milgram tries so hard to make.

My second objection is to the assertion that beating one person to death with a rock is morally less objectionable than killing ten thousand with artillery. With both actions happening entirely in a vacuum I'd agree, but that isn't what is implied. Rather, beating one person to death with a rock is almost certainly a murder, where the death of thousands to artillery is a military action.

Now, if you remember your Clausewitz, you will recall that war is an attempt to achieve political aims by force, whereas a murder is either utterly pointless or serves only selfish ends. The simple fact of the matter is that killing 10,000* to save a million is morally less objectionable (at the very least!) than offing one man so you can have his wallet. Milgram's blanket assertion otherwise, frankly, is offensive to my intelligence.

*Don't think for a second I'm asserting that this is something that is fun or should be taken or done lightly; I am, however, asserting that it is sometimes necessary.

Microblog #37, "Obedience to Authority, Chapters 9-14"

Summary
The second half of Milgram's book is an exposition on his theories as to why he got the results he did. This is of course the logical conclusion to the first half, where he presented the results.

Discussion
You know, closing this book with "To focus only on the Nazis...is to miss the point entirely" is a pretty bold statement for a guy who talked about them so much. I also challenge the validity of his conclusions, since I still feel my previous objections to the experiment's validity hold.

Monday, March 21, 2011

Microblog #36, "Obedience to Authority, Chapter 1-8"

Note: The assignments page is a little off kilter; this is listed as #34, but so is Chapter 10 of OSB.

Summary
These chapters describe the Milgram electroshock experiments in far greater detail than previous treatments. Examples include charts, variations, and descriptions of specific incidents.

Discussion
While I still feel the experiment is flawed at its core, the variations were interesting. Dr. Milgram tries far harder than is warranted to force a connection between his work and the Holocaust. Probably, this is an attempt to make his work seem more relevant.

Paper Reading #15, "A Reconfigurable Ferromagnetic Input Device"

http://chi2010-cskach.blogspot.com/2011/03/paper-reading-15-interactions-in-air.html
http://dlandinichi.blogspot.com/2011/03/paper-reading-15-eddi-interactive-topic.html

A Reconfigurable Ferromagnetic Input Device
Jonathan Hook, Newcastle University
Stuart Taylor, Alex Butler, Nicolas Villar, and Shahram Izadi, Microsoft Research Cambridge
Paper presentation not specified therein, but presumably UIST 2009.

Summary
This paper describes a general sensing technique using ferric materials over sensor coils to allow computers to detect manipulations of said ferric materials. The goal is to provide totally reconfigurable input devices, which could be customized at will for any user or any application.

I don't feel that the specific technical details, which are provided, are of particular interest to the likely audience of this blog post. One area of interest, however, was their ideas for future work which were remarkably specific. These included a tangible electronic music sequencer and joysticks that could sense the intensity of grip, as well as the less unique haptic feedback possibilities.

Discussion
A completely reconfigurable user interface is something of a Holy Grail, at least in theory, but I don't see enough here yet to be convinced that the authors are Galahad. I think the most important future direction for the work would be to get a proof-of-concept program running using a completely custom interface that is reconfigurable while in use (note that both given examples are pre-set). That would take this from interesting sideshow to a big leap forward.

The big weakness of this paper was that both P.O.C.s relied on the ferrofluid bladder and not on general iron objects.

Microblog #35, "Coming of Age in Samoa, Chapter 12"

Summary
This chapter describes the roles of adult women, old people of both sexes, and the taboos placed upon pregnant women.

Discussion
The most interesting thing noted in this chapter was that the change in status for non-titled individuals was determined more by age than by marital status.

Full Blog, "Opening Skinner's Box"

Opening Skinner's Box
Lauren Slater
Published 2004, Norton & Company, New York

Summary
In this book Mrs. Slater gives us summaries of the facts about and her musings on several famous psychological experiments of the last century. Notably included are Milgram's obedience/electroshock experiment and Rosenhan's deconstruction of American psychological hospitals.

Discussion
This book is interesting and significant because the subject material is interesting and because it is by far the most important reading in terms of in-class discussions. The worst fault of Mrs. Slater's writing is a tendency to rely too heavily on her own prose and spend too little time on the facts; several of the experiments themselves had flaws, which I discussed in the appropriate micro-blogs. Mrs. Slater also has an unfortunate tendency to stick to the traditional narrative of some events rather than question it; I think this was most apparent in the chapter dealing with Kitty Genovese.

I don't think "future direction" is a necessary section here, so I will omit it. Instead I will close by saying that although I think this book has less practical value to computers than, say, Design of Everyday Things, it was very interesting and informative and I feel smarter for having read it.

Ethnography Results, Week 6

Nothing to report; no games happened among the groups we were tracking in College Station at a time that allowed me to attend, due to Spring Break.

Sunday, March 6, 2011

Ethnography Results, Week 5

This week I had meetings all day Saturday to work on the capstone project (we have a Hello World running on our Galaxy Tab, now) and was unable to attend either the morning game Stuart was planning to go to or Wesley's Saturday evening game.

Fortunately, one of my old roommates was coming through on Friday for grad school orientation and pitched getting some people together to play a session. As such things go, the instigator himself wound up having to drop out - you can certainly chalk that one up to the difficulties of getting everyone on the same page! - but the rest of us decided to push ahead, since we were all looking forward to it.

The group consisted of myself and one other guy in my apartment in College Station and one other friend of mine VCing in from where he lives these days. There were several changes from games I've played in the past: First, although we've done some gaming by voice chat before, we've rarely done much. Second, the DM for this session was not the guy who usually plays that role, but a less experienced DM (while still being a veteran player). Finally, none of us had played Call of Cthulhu before.

Call of Cthulhu is unique in that the players are supposed to wind up losing - it's of course based on Lovecraft's work - which is, in theory, a conceptual break from games such as D&D. Of course, it's all up to how the DM runs the game anyway.

We started at 6:50, against a scheduled start time of 7:00. This group has never really had a problem with getting started late, which may reflect the personalities of the people involved (specifically, my personal predilection for grossly conservative time estimates), or may simply reflect the fact that, since we were all in the intended locations by 5:00 to set up the game, there were no transportation-related delays.

The game session started with the usual level of out-of-character joking around, which never really abated, as Derek also observed in the Saturday night game a couple of weeks back. The plot revolved around our characters going to investigate a house where multiple people had gone insane and which was unsurprisingly rumored to be haunted.

Call of Cthulhu was a good system for the setup we had going. Unlike D&D, combat is sufficiently unstructured that it's not required for all players to be able to see a grid of squares or hexes - obviously a bonus when playing without visual communication. On the other hand, the setting assumes that the DM and the players will try to build a horror-novel atmosphere, which didn't really wind up happening.

I think the strongest conclusion to be drawn here is that the personality of the players has a major effect on the progression of games.

Thursday, March 3, 2011

Microblog #34, "Opening Skinner's Box, Chapter 10"

Summary
First, Mrs. Slater tells us the history of lobotomy. Then, she tells us it's better than Prozac.

Discussion
I can't agree with the argument of the second half of this chapter. I think there's a real fundamental difference between cutting out tissue and treating patients with drugs, in spite of what Mrs. Slater says. In one case, we know there are consequences; in the other, I suppose we can't prove there aren't. Those aren't the same at all.

Microblog #33, "Coming of Age in Samoa, Chapter 11"

Summary
This chapter covers sources of potential conflict for Samoan girls. First it describes the various social pressures, then covers the cases of some specific delinquents.

Discussion
I find it hard to believe the rate of such things is as remarkably low as Dr. Mead says it is. Either the sample size is a lot smaller than I guesstimated, she missed something (the Samoans were noted to be a recalcitrant bunch), or I'm just weirded out.

Paper Reading #14, "PhotoelasticTouch"

http://stuartjchi.blogspot.com/2011/03/paper-reading-13-cosaliency-where.html
http://chi2010-cskach.blogspot.com/2011/03/paper-reading-10-bonfire-nomadic-system.html

PhotoelasticTouch: Transparent Rubbery Tangible Interface using an LCD and Photoelasticity
Toshiki Sato, Haruko Mamiya, Hideki Koike
University of Electro Communications, Tokyo
Kentaro Fukuchi
Japan Science and Technology Agency
Venue of presentation not specified in paper; presumably UIST.

Summary
PhotoelasticTouch is designed to overcome the current limitations of tangible interfaces. The idea is for a tabletop system to use an overhead camera and an LCD screen to recognize deformations of an elastic object held in the user's hands. Any 3D elastic object can be used, either one provided or one created by the user (in this case, the researchers used polyethylene and silicone rubber to get the ideal balance between rigidity and deformability). The object's position is detected by the camera picking up refraction of light from the table, and is then displayed on the screen. Deformations are easily picked up and measured by the effect the stressed elastic has on the polarized light; the process is related to that used in photoelastic stress analysis.
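To make the sensing principle concrete, here's a minimal sketch of how touch detection might work: under crossed polarizers, stressed elastic rotates the polarization of the LCD's light, so pressed regions show up brighter to the camera. The function name, the threshold value, and the frame-differencing approach are my own illustration, not the authors' actual pipeline.

```python
import numpy as np

def detect_touches(baseline, frame, threshold=30):
    """Find touched regions by comparing grayscale camera frames taken
    through crossed polarizers. Stressed elastic rotates the polarization
    of light from the LCD, so touched regions appear brighter than the
    untouched baseline; the brightness increase is a rough pressure cue.
    `baseline` and `frame` are 2-D uint8 arrays of equal shape."""
    # Signed difference: positive where the current frame got brighter.
    diff = frame.astype(np.int16) - baseline.astype(np.int16)
    touched = diff > threshold                 # binary touch mask
    pressure = np.where(touched, diff, 0)      # brighter = more stress
    return touched, pressure
```

A real implementation would also need blob labeling to separate multiple fingers, but the brightness-difference core is the interesting part.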

Example applications given were, first, a pressure-sensitive touch panel (which I infer from the pictures mimics the operation of a traditional touch panel by placing a sheet over the whole table, although use of more than one finger at once was certainly possible here), with which users were successfully able to manipulate a 3D object by pressure alone, without sliding their fingers. The second application used an elastic face to allow interaction with a digital one. This wasn't much elaborated on.

The third and last was the most fascinating. Taking a small piece of the elastic material, the user waves it in the air above the table, and when they pinch it, the shape's shadow appears on the screen below. This enables the user to employ any provided shape as a paintbrush of that shape.

Some drawbacks mentioned were the camera having difficulty functioning over a dark screen (since it relies on the polarized light from the table), occlusion of the camera by the user's hands, and usage negatively impacting the transparency of the material. The researchers believe the light issue and the deteriorating transparency will be easily corrected.

Discussion
This project is pretty novel and has a lot of good ideas, although as the flaws section points out this was still just a proof of concept in a lot of ways. The best potential for applications here is for those that require specialized controllers; the transparent materials used here have to be less expensive than directly specialized electronics. I don't think I would want this for my base interface, but it can certainly be set to complement the mouse & keyboard rather than compete with them.

The biggest weakness of this research is the hands getting in the way of the camera. I realize this was already mentioned above, but it really is going to be the most difficult to solve and seriously impacts both degrees of freedom and precision of the input.

The paper itself was well done, I thought. It was well organized, readable, not given to excessive hyperbole, and unlike some previous UIST papers discussed applications and not just hardware.

The next direction to take this research should be, in addition to those the researchers mentioned themselves, trying to find some kind of material that meets the deformation requirements but is easily reconfigurable. That would massively increase the versatility of this platform.

Source: Paper

Tuesday, March 1, 2011

Microblog #32, "Opening Skinner's Box, Chapter 9"

Summary
This chapter starts with Mrs. Slater explaining how a lobotomy victim taught science about the hippocampus, and ends with her telling us how soon we're all going to get a little red pill that'll give us all perfect memories.

Discussion
Mrs. Slater continues wandering off into the psychological wilderness in this chapter. Just a whole bunch of depressing musings on how depressing everything is, which has frankly constituted a large part of this book.

The most relevant comment I have on this chapter comes at the very beginning. How on Earth would anyone ever think sucking out portions of someone's brain was a good idea? I mean, there's some things we've learned as time has moved on, and then there's a gross frontal assault on common sense.

Microblog #31, "Coming of Age in Samoa, Chapter 10"

Summary
This chapter gives brief profiles of several of the girls Dr. Mead studied. The focus seems to be on their sex lives, although some other elements are mentioned as well.

Discussion
What we have here is a long list of things I did not and do not want to know regarding the, ahem, "details" of life in early 20th c. Samoa. However, if I had to extract one interesting thing from this chapter, it would be the differences in temperament and skills of the girls from the biological families vs. the large Samoan ones.

Paper Reading #13, "Mouse 2.0"

http://dlandinichi.blogspot.com/2011/03/paper-reading-14-madgets-actuating.html
http://chi2010-cskach.blogspot.com/2011/02/paper-reading-13-semfeel-user-interface.html

Mouse 2.0: Multi-touch Meets the Mouse

Nicolas Villar, Shahram Izadi, Dan Rosenfeld,
Hrvoje Benko, 
John Helmes, Jonathan Westhues,
Steve Hodges, 
Eyal Ofek, 
Alex Butler, 
Xiang Cao, and 
Billy Chen of Microsoft Research
Presentation venue not specified in paper.

Summary
This paper, as suggested in the title, presents research into expanding upon the current computer mouse. Five different implementations are presented, each incorporating "a different multi-touch sensing strategy". The authors emphasize that their goal is supplementing, not replacing, the mouse; that touch screens require more effort and create fatigue, making them less than ideal; and that the goal of these designs is to produce a usable design allowing for a cursor with more than two degrees of freedom. The five implementations follow.

FTIR Mouse (all images taken from the paper)


FTIR Mouse
The acronym stands for Frustrated Total Internal Reflection. The mouse consists of a curved piece of acrylic set in front of an infrared camera, which tracks the movement of fingers across the acrylic. Which movements on the surface map to which actions of the mouse is not explicitly stated. One of the diagrams makes it clear that there are physical buttons, but doesn't indicate which end of the acrylic they are located at; if the rear, the motion may be counterintuitive. The limitations noted are decreased ergonomics and problems operating in areas with significant ambient IR light. However, the mouse did accomplish the basic design goals.

Orb Mouse (all images taken from the paper)

Orb Mouse
The Orb Mouse has an IR camera pointing in all directions under the dome, with the stated advantages of allowing all five fingers to be employed, avoiding broad environmental "noise" (as the IR light is generated internally here), and improved ergonomics relative to the FTIR Mouse. However, the mouse is more susceptible to interference from objects in proximity, and not just those that generate IR. It is specified that you can click the whole mouse, but not whether the mouse is limited to single-button functionality.

Cap Mouse (all images taken from the paper)

Cap Mouse
The Cap Mouse, or capacitive mouse, does not use the IR approach of the previous iterations but rather uses "a flexible matrix of capacitive sensing electrodes" to determine the position of the user's fingers. Cap Mouse is explicitly single-button (in the front) and has the advantages of lower required bandwidth, complete immunity to ambient light, and increased compactness. However, it lacks the precision of the optical mice.
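The paper doesn't spell out how finger positions are recovered from the electrode matrix, but a common approach for this kind of sensor - my assumption here, not the authors' stated method - is a weighted centroid over the capacitance readings, which gives sub-electrode precision while still falling short of an optical sensor:

```python
import numpy as np

def finger_position(cap_matrix):
    """Estimate a fingertip position from a matrix of capacitance
    readings by taking the intensity-weighted centroid of the readings.
    Returns (row, col) in fractional electrode units, or None when no
    electrode reads above zero (no touch)."""
    c = np.asarray(cap_matrix, dtype=float)
    total = c.sum()
    if total == 0:
        return None                      # no touch detected
    rows, cols = np.indices(c.shape)     # coordinate grids, same shape as c
    return (rows * c).sum() / total, (cols * c).sum() / total
```

A finger pressed between two electrodes raises both readings, so the centroid lands between them - that's where the extra precision comes from.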

Side Mouse (all images taken from the paper)

Side Mouse
Side Mouse uses a wide-angle camera lens to detect when the fingers of the user touch the table around it. It is IR-based, as in the previous implementations described here. It is a single-click device, with clicking performed by pressing down with the palm. The primary advantage of Side Mouse is that the scope of possible interactions is not limited by the physical size of the mouse itself.

Arty Mouse (all images taken from the paper)

Arty Mouse
The Arty Mouse, or articulated mouse, has the user place their fingers on the projections, as you can see above, and then tracks the movement of those projections and the main mouse body relative to each other by means of an optical sensor under each element. It is the highest-precision mouse described here, and it was the most popular among test users.
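The paper doesn't detail exactly how the per-element sensor readings get combined, but a 3-DOF cursor update from two of the optical sensors can be sketched roughly like this (the function and its mapping are my own illustration, not the authors'): translation comes from the body sensor, and rotation is the change in angle of the body-to-projection vector.

```python
import math

def arty_update(base_prev, base_cur, tip_prev, tip_cur):
    """Derive a 3-DOF update (dx, dy, dtheta) from two optical sensors:
    one under the mouse body, one under a finger projection.
    Positions are (x, y) tuples in the same coordinate frame."""
    dx = base_cur[0] - base_prev[0]
    dy = base_cur[1] - base_prev[1]
    # Angle of the body-to-tip vector before and after the move.
    angle = lambda b, t: math.atan2(t[1] - b[1], t[0] - b[0])
    dtheta = angle(base_cur, tip_cur) - angle(base_prev, tip_prev)
    # Wrap the rotation delta into (-pi, pi].
    dtheta = math.atan2(math.sin(dtheta), math.cos(dtheta))
    return dx, dy, dtheta
```

With more projections tracked, a least-squares fit over all the sensors would give a more robust estimate, but two points are enough to show where the extra degree of freedom comes from.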

The researchers offered a description of the multi-touch space which they were trying to manipulate, along with how they were attempting to manipulate it. This description, however, seems incomplete. I suspect a demonstration would be very helpful.

Finally, they outlined some conclusions of their research. Most surprising to them was that the more radical departures from conventional mice were better received. This may have been because users of the radical designs didn't carry over habits from conventional mice that then had to be unlearned, which was a particular problem with Cap Mouse.

Discussion
I think there are a lot of good ideas here, but overall I think that mice like this aren't going to be useful without a new paradigm of things for them to manipulate. Let me explain what I mean with an analogy.

In the current, 2-DOF mouse setup, I can move around the two-dimensional screen of the computer/building. In order to get to different levels, I have to go find an icon/staircase and utilize it. While a mouse with more degrees of freedom would allow me to fly around the floor, it ultimately wouldn't give me any advantage, as anything I want to use is sitting on the floor and I can't get to another floor by flying - I still have to use the stairs. Of course, if the building were designed to accommodate flying types, being able to fly would be quite helpful. I don't know which would have to be the chicken and which would have to be the egg in this case, but it's food for thought. (And, of course, if I know how to use a command line I can just teleport around the place.)

The details of this paper I can't really comment on too much, since I am very much a software guy and this is very much a hardware paper. The biggest weakness of this piece is a failure to have either a very clear description or a very clear demonstration of how manipulating each mouse mapped to manipulating the screen. I was unable to follow the discourse on that as written.

The next step, I think, would be to correct the flaws their research illuminated in their mice and begin to determine the implications of multi-DOF mice. That is, what kind of software can we build to use this, and what function would that software serve?