For years, we at Superbloom have dedicated ourselves to understanding and sharing how designers work and how design is practiced within open source teams and communities (see Open Source Design and our podcasts on Sustaining Open Source Design). With support from the Sloan Foundation, we had the opportunity to dig into these questions within academic and scientific communities in particular. We named this project “USER” — Usable Software Ecosystem Research — and defined it as a research initiative that explores how open source scientific and research software (SROSS) teams understand, consider, and undertake usability and design opportunities in their projects.
Our research followed an iterative human-centered design approach using multiple methods of inquiry. Human-centered design (HCD) is a process centered on understanding people’s needs — specifically end users’ needs — and leveraging qualitative and quantitative research to generate actionable insights. Our primary activities were:
1. **Ecosystem analysis & community observation**
We spent the first two months of this research project analyzing data and information about the open source research software ecosystem to better understand existing resources, programs, and efforts to support research software for science. In this period, we reviewed relevant literature on design and usability in OSS, and on scientific OSS in particular.
We also began compiling information about projects and players in the ecosystem and began building what became our Ecosystem Map, which enabled us to understand the overlapping web of tools, projects, people, and funders that make up the SROSS space.
While the USER research team delved into the papers, blogs, repos, and written accounts of key terms and subjects, we also made social connections with key SROSS funders, projects, institutions, and individuals at conferences and community events like CZI Open Science and OpenRIT. We’ve learned from past design research projects in OSS that important context and nuance is revealed through relationship building, so establishing these connections contributed to our exploratory research practice. Developing an understanding of, and learning to participate in, the culture and community of OSS comes from connecting with other people who value the work — not unlike how OSS itself gets built and maintained. These connections fed our research activities and allowed our design researchers and the scientists, researchers, and designers we met to speak on equitable, social terms and gather initial human-centered insights.
This process helped us describe and situate ourselves within this particular landscape: what projects currently exist, what types of funding are available to projects, what types of support and resources exist, what types of needs have been identified in the ecosystem, the language used by projects, the tools used in the ecosystem, and the metrics and measurements that may be in use by projects.
From the scan we were able to understand what questions we needed to ask in our subsequent surveys and interviews. It also helped us to develop an outreach plan and partner with ecosystem stakeholder organizations to publicize our survey and gather enthusiasm for our research.
2. **Online surveys (48 responses)**
Our research and outreach at conferences helped to inform preliminary findings and hypotheses, creating the foundation for our published survey. We ran two versions of the survey (one longer version with both qualitative and quantitative questions, and one shorter version with only multiple-choice questions) to gather broad signals about how the community prioritizes, describes, and self-assesses on usability and design issues. We invited the project contributors we’d met in person and those we’d identified in our ecosystem research to participate, and shared in communication channels and email lists frequented by communities of interest. We received 48 total responses to our survey.
3. **Interviews (27)**
We completed 27 hour-long interviews for this project from December 2022 through February 2023. Our interview script contained 32 questions across 5 key themes: users; design; project start & institutional affiliation; groups of people & contributors; and values & the future. We used a semi-structured style, adapting the script to a given conversation with a participant. We spoke to a broad range of maintainers, developers, designers, scientists, researchers, funders, Open Source Program Offices (OSPOs), and stakeholders — all involved with creating and maintaining open source scientific & research software. The interview guide sought to gain further insight into:
- How norms in academic, science, and/or open source working environments affect the choices teams make around their users and different kinds of design interventions.
- How team dynamics and trust affect those choices.
- What teams would need to be interested in or able to prioritize usability and design in their work.
4. **Data compilation and synthesis**
Our six-person research team then collaborated virtually and in-person to organize, synthesize, and analyze the qualitative and quantitative data we’d collected through the surveys and interviews. We used a combination of methods to analyze our recorded interview transcripts, interview notes, and survey results, including:
- A collaborative Miro board where we practiced large-scale card sorting to organize key findings and quotes
- A qualitative code book in Notion that helped us to sort quotations from interview participants
- Qualitative and quantitative analysis of survey results in Google Sheets
Our research aimed to cover a wide range of projects and the people working on them. To clarify the scope of our results, we want to acknowledge the research’s limitations.
Our research focused mainly on United States-based (or founded) projects, with some European-based projects included. There are two reasons for this. First, the organization that funded this research, the Sloan Foundation, focuses on research and academia in the US. Second, many open source research software projects are created at universities in the US or Europe, because these countries and their economies are typically able to fund such projects.
As a project concerned with both academia and the production of open source code, our sample also reflects the implicit prioritization of over-represented identities and backgrounds — with regard to gender, race, class, and other factors — in both academia and open source software communities.
Lastly, this research was limited by the time and capacity available to recruit additional projects and people beyond those that volunteered through our initial outreach efforts. When working to include under-represented identities and populations (BIPOC, LGBTQIA+, etc.), time, capacity, and voluntary self-identification are always factors. We made efforts to signal that we wanted to speak to people and projects that were under-represented, but due to time and capacity we could not extend our deadlines to include those not readily available to participate.