I conducted contextual research with a team of researchers to observe AI implementation and troubleshooting in practice. The results surfaced early indicators of how Mendix could help their clients build their own AI-powered apps.
Mendix offers a low-code application (app) development platform for enterprises. Their core offering is their integrated development environment (IDE), but my research focused on user experiences in the online portal that supports the IDE.
This project took place within a broader context of industry-wide AI experimentation and implementation. Secondary research showed that enterprises generally held a very positive view of generative artificial intelligence (GenAI), with most exploring it at the time of this research.
We knew we needed to observe enterprise teams implementing their AI solutions and that getting access to these teams in this specific context would be difficult. We decided to take advantage of a two-day internal AI-themed hackathon to gather data while recruiting externally.
The goal of this study was to gather early indicators of 1) how enterprises implement and troubleshoot AI solutions in practice and 2) how users experience interacting with AI. Specifically, we wanted to identify experiences around AI ideation, decision-making, how teams evaluate AI solutions, interactions, planning for end users, and challenges.
Research goal
Research objective example
We conducted a contextual inquiry, a type of ethnographic field study that involves in-depth observation and interviews of a small sample of users to gain a robust understanding of work practices and behaviors. This method is especially well-suited for understanding users’ interactions with complex systems and in-depth processes, as well as the point of view of expert users.
👍This method enabled us to uncover unknowns about the AI implementation process in-depth and from (relative) start to finish, so our data would be rich in detail and context.
👎The hackathon context can greatly differ from the context of implementing AI within an enterprise as part of day-to-day work. Participants may also alter their behavior as an effect of feeling observed.
We shadowed a team of five participants in different roles, including team leads, software engineers, and designers. Everyone on the team had some prior experience with AI tools. In addition, we observed hackathon participants who approached the hackathon coaches for help.
Observation lasted the duration of the event (two 8-hour days), supplemented by introductory and debrief interviews with the team we shadowed.
We collected, analyzed, and synthesized data in Confluence.
Screenshot of our field notes while observing the main team of participants.
Copy of my sketch of the field environment. We chose to make sketches rather than take photos to preserve participants' privacy during the event.
We uncovered several key behaviors that may be indicators of real-life AI implementation and interaction.
Briefly, behaviors included ways in which people:
Results from this study informed planning for primary research activities with Mendix customers.