It feels like I just returned from the annual ASEE meeting. I presented a paper about a topic near and dear to my heart: the new undergraduate major in Human-Centered Design and Development (HCDD) I spearheaded at Purdue.
The paper tells the design story (birth story) of the new program. I took a user-centered approach to curriculum design, since that’s what I know best. I think one of the most valuable tools that came out of it was the vision persona. And, of course, the program itself. 🙂
The paper is available online (you can read it here) and the slides I used are below.
I am so pleased that we launched the redesign of DIA2 and the new homepage this weekend! It’s been a long and fun journey!
DIA2 is a Web application for knowledge mining and visualization of the NSF funding portfolio. Anyone can use it to explore where NSF funding goes and how it is distributed geographically, across NSF divisions, topics, and institutions. You can explore collaboration networks of researchers who worked together on proposals, identify who’s well connected in a field, and figure out which NSF programs and program managers have funded research similar to yours.
I’m happy to have been involved with DIA2 since the very beginning, as a co-Principal Investigator (co-PI). I led the UX team for the project. We started with user research to understand user needs, and moved through ideation, wireframing, testing, the whole 9 yards. It’s been very rewarding to hear users say, “This thing reads my mind!” and “I feel it was designed for ME!” Perhaps best of all, DIA2 gave me the opportunity to work with and mentor many talented students. All DIA2 “employees” have been students working under a PI’s supervision. I am so proud of them!
If you’d like to, go check DIA2 out for yourself – it’s available for all at DIA2.org.
Or, read some research papers about it:
Using visualization to derive insights from funding portfolios. In IEEE Computer Graphics and Applications, 2015.
DIA2: Web-based cyberinfrastructure for visual analysis of funding portfolios. In IEEE Transactions on Visualization and Computer Graphics, 2014.
Portfolio mining. In IEEE Computer, 2012.
I came across this article in HuffPo about a new app some students created that can help you identify your most toxic friends. They call it an art project, but I seem to recognize here a common structure for research projects in HCI. So, if you’re my student looking for thesis ideas, read on. 🙂
The recipe goes like this:
1. Take a problem or issue from the social world (e.g. toxic friendships, collaboration, long-distance family relationships).
2. Create a technology that mediates how people deal with that issue – ideally, the technology should improve the human condition or raise critical questions.
3. Evaluate the technology.
4. As a result of evaluating the technology, illuminate some aspect of, and contribute knowledge to, the problem in step 1. And/or, at the very least, derive design implications for this type of technology.
Some examples of papers following this structure:
I recently watched this TED talk by Daniel Kahneman about the experiencing self and the remembering self.
Apparently, they’re quite different. The experiencing self is the one who lives and feels in the moment. The remembering self is the one that engages in retrospective sense-making and decides, post-facto, whether the experience was good, fun, etc. It is the remembering self’s evaluation that informs future decision making.
This has enormous implications for UX evaluation. As Kahneman explains in the talk, even when the experiencing self has a (relatively) bad time, if the remembering self makes a positive evaluation, the experience is remembered as good. We can measure UX in the moment, and track eye gaze and all that jazz. But ultimately, what really matters for future decisions is what users take away from the experience and how they evaluate it after it’s over. This is good news. It means that users may forget or put up with a few frustrations – and still assess the experience well, especially if it ends well. It also means that the research framework for website experience analysis that I created back in 2004 is valuable, because it focuses on how users make sense of the experience and what they take away.
I noticed that the Discussion chapter is one of the hardest to write, especially when you are so close to the results and your head is wrapped up in all the data. Writing the Discussion chapter requires taking a few big steps back and seeing the big picture. For that reason, I often write it with my eyes closed, without looking at the results. Or I ask students to imagine they ran into a friend or colleague at a coffee shop. They don’t have the manuscript or slides on them. They just need to explain to the colleague, without using numbers, or tables, or figures – just narrative – the following:
- what they did (briefly)
- what they found – what were the significant, memorable findings?
- what do the findings mean? – what does it mean that X was rated as 4.61 and Y was rated as 3.93?
- to the best of your knowledge, why do you think that is? what accounts for these results?
- why are the findings significant/important/useful? how can they be used, and who can use them?
This is the part where you sell your research. But then, a word of caution:
- what went wrong?
- what should we keep in mind as we buy into your findings? how do the limitations of your study affect the results? (this is, indeed, the Limitations section)
Think of the Discussion chapter as an executive summary. If it is the only thing I read, I should get a good understanding of what you found and why it matters. You should explain it to me clearly, in a narrative, without restating your results.
And now that we are so close, I might as well address the Conclusion chapter. It should accomplish 2 things:
- Summary of the entire project – this can be an extended abstract. What you set out to do (purpose of research), what you did (methods) and what you found out (main results).
- Directions for future research. I learned something great about this in a thesis defense yesterday. Think beyond replicating your study and overcoming your limitations. Think beyond better ways of addressing the same research questions. Now that we know what your research results are, what are other interesting questions we should address? What other issues and questions arise?
I’ve said this so many times in the past few weeks that I felt writing a blog post I can refer students to might be helpful. Please feel free to add your advice or questions in the comments below.
The Chronicle of Higher Education published today an article about a course on Information and Contemplation taught by David Levy at UW. It’s interesting to see that Levy’s previous work on the effects of meditation on multitasking was actually funded by the National Science Foundation – and that ACM CHI and Graphics Interface publish this kind of work.
This post explains an alternative research protocol, website experience analysis (WEA).
Website experience analysis is a research protocol (set of procedures) that can help researchers identify what specific interface elements users associate with particular interpretations.
WEA focuses on the messages that users take away from their experience with the interface.
All interfaces try to communicate something, such as:
- you should trust this application with your credit card data
- you should come study for a MS degree in CGT at Purdue
WEA allows you to find out:
- whether the interface actually communicates this message – do people actually take away the message that you intended, and to what extent?
- what specific elements of the interface users associate with those particular messages (trust, CGT is a good program, etc.)
The WEA questionnaire is based on prominence-interpretation theory. It works with pairs of items:
- a rating of a user perception (e.g. trust, on a scale of 1-10)
- an open-ended follow-up: what about the interface makes the user feel this way?
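To make the paired-item idea concrete, here is a toy sketch (not the published instrument; all names and data are hypothetical) of how responses to one such pair – a trust rating plus the elements a participant mentioned in the open-ended follow-up – might be tallied to connect perceptions with specific interface elements:

```python
from collections import defaultdict

# Hypothetical responses: each pairs a 1-10 rating of a perception (trust)
# with the interface elements the participant mentioned as the reason.
responses = [
    {"trust": 8, "elements": ["testimonials", "https padlock"]},
    {"trust": 3, "elements": ["stock photos", "pop-up ads"]},
    {"trust": 9, "elements": ["testimonials", "clear contact info"]},
]

def elements_by_rating(responses, perception, threshold=5):
    """Count mentioned elements separately for high vs. low ratings."""
    groups = {"high": defaultdict(int), "low": defaultdict(int)}
    for r in responses:
        bucket = "high" if r[perception] > threshold else "low"
        for element in r["elements"]:
            groups[bucket][element] += 1
    return groups

groups = elements_by_rating(responses, "trust")
# Elements participants associated with high trust, with mention counts:
print(dict(groups["high"]))
```

In an actual WEA study the open-ended answers would be coded qualitatively before any tally like this; the sketch only illustrates the analytical move of linking a perception score to the specific elements users named.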
WEA is based on a much more complex theoretical framework of the website experience. The framework breaks the website experience down into two major dimensions: time and space. It explains the phases of the experience as they unfold across time, and the elements of the website space (elements are categorized according to their functions). The theoretical framework is likely valid only for websites: the experience with another type of interface may share the same three main temporal phases (first impression, engagement, exit), but it will likely differ in the steps within those phases and in the nature of the spatial elements and their functions.
WEA is different from a regular questionnaire because it connects perceptions with specific interface elements. Questionnaires will tell you whether the user trusts the product, but they won’t provide specific feedback as to what particular elements may account for that perception.
WEA is modular, which means that a different battery of items can be used, depending on the focus of the research. I used WEA in 2 contexts:
- To evaluate the experience of visiting organizational websites. Here, I used the 5 dimensions of good relationships between organizations and their publics: trust, commitment, investment, dialog, etc.
- To evaluate whether emergency preparedness websites persuade users to take emergency preparedness actions. Here I used a battery of items derived from a theory of fear appeals (EPPM) and assessed whether users perceived there is a threat, believe they can do something about it, believe the recommended actions would be effective, etc.
I think WEA would provide excellent feedback about how prospective students perceive the CGT department, based on their experience with the website. It would be very valuable to find out exactly what about the website makes them feel that:
- they would benefit from a CGT MS
- they would fit in
- they would have a good educational experience
- etc. – we have to determine the relevant set of items. Ideally, we would have a theory to guide item development.
WEA can be used with other research questions, such as: How do HR managers look at job candidates’ online information? (hello, Jack!)
WEA can be improved upon to better tap into emotional aspects of the user experience. It can be modified to be a more inductive approach, that elicits emotions and interpretations from users rather than asking about specific interpretations (such as trust, etc.) – thank you, Emma, for these suggestions!
If you would like to read more about WEA, you can find the relevant citations in Google Scholar. I can provide copies of the papers if you don’t have access to them.