I had the pleasure of working with colleagues from Microsoft on a CHI 2019 paper, Guidelines for Human-AI Interaction. This post provides links to the paper, related resources, and blog posts:
Problem: The National Science Foundation (NSF) needed a way to help them understand and evaluate their funding portfolio.
Context: NSF is a federal agency that provides, on a peer-reviewed, competitive basis, funding for research, in order to advance the mission of science.
My role: I was a co-Principal Investigator on this $3 million, 4-year project. I led the UX research and design.
The project had two goals:
- Create a tool for monitoring and evaluating the NSF funding portfolio that would serve NSF’s internal and external users.
- Advance the mission of science by generating fundamental knowledge and research publications in the area of portfolio management.
Phase 1: NSF employees
In the first phase of the project, we focused on serving the internal NSF audience. My contribution to the project was as follows:
I planned & conducted formative research in the field in order to identify user groups inside the NSF and understand their needs. I focused the data collection around questions such as:
- What are decisions made by NSF employees on a regular basis?
- What information do they need to make those decisions? Where is that information located, in what formats, and how do they access that information?
- What are the larger goals of NSF employees? What makes them feel successful? What are their big motivators?
I closely supervised a graduate research assistant who assembled the user modeling report containing 3 personas. We published a paper in the Proceedings of HCI International that presents the results to an academic audience.
I chose the most vulnerable persona as the primary one. I led the interdisciplinary research team through a design exercise in which we created a context scenario for our primary persona and extracted design requirements – what information would Matt need in order to begin understanding his funding portfolio and be productive at his new job? I generated early sketches based on our brainstorming. The general approach we took to knowledge mining and visualization is explained here and here.
I directed students as they started working on wireframes based on my sketches and coordinated the communication between the UX and technical teams. In the days before Slack, I used an internal team blog to track and communicate work.
I conducted early testing on the alpha version. My goal was to assess ease of learning: could our users figure out how to use our Web application? Would they understand the interactive data visualizations and interpret them correctly? User feedback, documented in this report, included comments such as:
- “I feel this was designed for me!”
- “This thing reads my mind!”
We delivered DIA2 to the NSF and proceeded to focus on the external audience:
Phase 2: NSF external audience
The team identified STEM faculty members as the largest external audience. These are researchers who need to understand the funding portfolio in order to better target their proposals to the NSF.
I designed an interview protocol for intercept interviews we conducted at the annual meeting of the American Society for Engineering Education (ASEE), where we were most likely to encounter STEM researchers from various fields. I trained several graduate students, and together we conducted the interviews and collected the data we needed in 3 days.
I led a cross-disciplinary team of graduate students from both the UX and technical teams through a 2-day affinity diagramming process, which resulted in one persona, documented in this report and this conference paper.
With an understanding of the second user group’s needs, I wanted to ensure DIA2 served them well. I led the team through cognitive walkthrough exercises where we asked whether Dr. Anderson, our persona, would know what to do, and if he performed an action, whether he would know he was making progress towards his goals. I supervised one of my graduate students as she conducted usability testing with this new user group. This work resulted in a conference paper and her M.S. thesis.
DIA2 served about 2 million visitors over a 2-year period, and about 2,000 users created accounts. The project has since ended and the data is no longer updated, as is common with academic projects.
The research & design process, as well as technical aspects of DIA2 are presented in a paper we published in IEEE Transactions on Visualization and Computer Graphics. More research related to DIA2 is indexed on the project’s research page.
It feels like I just returned from the annual ASEE meeting. I presented a paper about a topic near and dear to my heart: the new undergraduate major in Human-Centered Design and Development (HCDD) I spearheaded at Purdue.
The paper tells the design story (birth story) of the new program. I took a user-centered approach to curriculum design, since that’s what I know best. I think one of the most valuable tools that came out of it was the vision persona. And, of course, the program itself. 🙂
The paper is available online (you can read it here) and the slides I used are below.
I am so pleased that we launched the redesign of DIA2 and the new homepage this weekend! It’s been a long and fun journey!
DIA2 is a Web application for knowledge mining and visualization of the NSF funding portfolio. Anyone can use it to explore where NSF funding goes, how it’s distributed geographically, across NSF divisions, across topics, and institutions. You can explore collaboration networks of researchers who worked together on proposals, identify who’s well connected in a field, and figure out what NSF programs and program managers have funded research similar to yours.
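The “who’s well connected” question above can be approximated with simple degree centrality in a co-PI network. A minimal sketch, using made-up researcher names and proposal rosters (purely hypothetical – not DIA2’s actual data model or algorithm):

```python
from collections import Counter
from itertools import combinations

# Hypothetical sample data: each proposal lists its collaborating PIs.
proposals = [
    ["Ada", "Ben", "Cara"],
    ["Ada", "Ben"],
    ["Ben", "Dan"],
    ["Cara", "Dan"],
    ["Ben", "Eve"],
]

# Build the set of distinct collaboration edges (pairs who co-wrote a proposal).
edges = set()
for team in proposals:
    for a, b in combinations(sorted(team), 2):
        edges.add((a, b))

# Degree = number of distinct collaborators per researcher.
degree = Counter()
for a, b in edges:
    degree[a] += 1
    degree[b] += 1

# The highest-degree researcher is the best connected in this toy network.
best_connected = max(degree, key=degree.get)
print(best_connected, degree[best_connected])  # → Ben 4
```

A real tool would weight repeated collaborations and use richer centrality measures, but distinct-collaborator counts are a reasonable first cut.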
I’m happy to have been involved with DIA2 since the very beginning, as a co-Principal Investigator (co-PI). I led the UX team for the project. We started with user research to understand user needs, and moved through ideation, wireframing, testing, the whole 9 yards. It’s been very rewarding to hear users say, “This thing reads my mind!” and “I feel it was designed for ME!” Perhaps best of all, DIA2 gave me the opportunity to work with and mentor many talented students. All DIA2 “employees” have been students working under a PI’s supervision. I am so proud of them!
If you’d like to, go check DIA2 out for yourself – it’s available for all at DIA2.org.
Or, read some research papers about it:
Using visualization to derive insights from funding portfolios. In IEEE Computer Graphics and Applications, 2015.
DIA2: Web-based cyberinfrastructure for visual analysis of funding portfolios. In IEEE Transactions on Visualization and Computer Graphics, 2014.
Portfolio mining. In IEEE Computer, 2012.
I came across this article in HuffPo about a new app some students created that can help you identify your most toxic friends. They call it an art project, but I seem to recognize here a common structure for research projects in HCI. So, if you’re my student looking for thesis ideas, read on. 🙂
The recipe goes like this:
1. Take a problem or issue from the social world (e.g., toxic friendships, collaboration, long-distance family relationships).
2. Create a technology that mediates how people deal with that issue – ideally, the technology should improve the human condition or raise critical questions.
3. Evaluate the technology.
4. Through the evaluation, illuminate some aspect of, and contribute knowledge to, the problem from step 1. Or, at the very least, derive design implications for this type of technology.
Some examples of papers following this structure:
I recently watched this TED talk by Daniel Kahneman about the experiencing self and the remembering self.
Apparently, they’re quite different. The experiencing self is the one who lives and feels in the moment. The remembering self is the one that engages in retrospective sense-making and decides, post-facto, whether the experience was good, fun, etc. It is the remembering self’s evaluation that informs future decision making.
This has enormous implications for UX evaluation. As Kahneman explains in the talk, even if the experiencing self has a (relatively) bad time, the experience is remembered as good as long as the remembering self evaluates it positively. We can measure UX in the moment, track eye gaze and all that jazz. But ultimately, what really matters for future decisions is what users take away from the experience and how they evaluate it after it’s over. This is good news: it means users may forget or put up with a few frustrations and still assess the experience well, especially if it ends well. It also means that the research framework for website experience analysis that I created back in 2004 is valuable, because it focuses on how users make sense of the experience and what they take away.
I noticed that the Discussion chapter is one of the hardest to write, especially when you are so close to the results and your head is wrapped up in all the data. Writing the Discussion chapter requires taking a few big steps back and seeing the big picture. For that reason, I often write it with my eyes closed, without looking at the results. Or I ask students to imagine they ran into a friend or colleague at a coffee shop. They don’t have the manuscript or slides on them. They just need to explain to the colleague, without using numbers, or tables, or figures – just narrative – the following:
- what they did (briefly)
- what they found – what were the significant, memorable findings?
- what do the findings mean? – what does it mean that X was rated as 4.61 and Y was rated as 3.93?
- to the best of your knowledge, why do you think that is? what accounts for these results?
- why are the findings significant/important/useful? how can they be used, and who can use them?
This is the part where you sell your research. But then, a word of caution:
- what went wrong?
- what should we keep in mind as we buy into your findings? how do the limitations of your study affect the results? (this is, indeed, the Limitations section)
Think of the Discussion chapter as an executive summary. If it is the only thing I read, I should get a good understanding of what you found and why it matters. You should explain it to me clearly, in a narrative, without restating your results.
And now that we are so close, I might as well address the Conclusion chapter. It should accomplish 2 things:
- Summary of the entire project – this can be an extended abstract. What you set out to do (purpose of research), what you did (methods) and what you found out (main results).
- Directions for future research. I learned something great about this in a thesis defense yesterday. Think beyond replicating your study and overcoming your limitations. Think beyond better ways of addressing the same research questions. Now that we know what your research results are, what are other interesting questions we should address? What other issues and questions arise?
I’ve said this so many times in the past few weeks that I felt writing a blog post I can refer students to might be helpful. Please feel free to add your advice or questions in the comments below.