Contextual inquiry to uncover AI implementation

I conducted contextual research with a team of researchers to observe AI implementation and troubleshooting in practice. The results surfaced early indicators of how Mendix should support its clients in building their own AI-powered apps.


Stakeholders
Mendix AI team
Study type
Generative
Team
I designed and executed the research within a team of three UXRs
Timeframe
3 weeks
Methods
Contextual inquiry
Tools
Confluence

Context

Mendix offers a low-code application (app) development platform to enterprises. Its core offering is an integrated development environment (IDE), but my research focused on user experiences in the online portal that supports the IDE.

This project took place within a broader context of industry-wide experimentation with AI and its implementation in practice. Secondary research told us that enterprises generally held a very positive view of generative artificial intelligence (GenAI), with most exploring it at the time of this research.

Challenge

We knew we needed to observe enterprise teams implementing their AI solutions and that getting access to these teams in this specific context would be difficult. We decided to take advantage of a two-day internal AI-themed hackathon to gather data while recruiting externally.

The goal of this study was to gather early indicators of 1) how enterprises implement and troubleshoot AI solutions in practice and 2) how users experience interacting with AI. Specifically, we wanted to identify how teams ideate about AI, make decisions, evaluate their AI solutions, interact with AI, and plan for end users, as well as the challenges they encounter along the way.

Gather early indicators of how enterprises implement AI in practice

Research goal

Identify challenges with implementing AI-powered apps

Research objective example

Solution

We conducted a contextual inquiry, a type of ethnographic field study that involves in-depth observation of, and interviews with, a small sample of users to gain a robust understanding of their work practices and behaviors. This method is especially well suited to understanding users' interactions with complex systems and in-depth processes, as well as the point of view of expert users.

👍This method enabled us to uncover unknowns about the AI implementation process in-depth and from (relative) start to finish, so our data would be rich in detail and context.

👎The hackathon context can differ greatly from implementing AI as part of day-to-day enterprise work. Participants may also alter their behavior because they know they are being observed (the Hawthorne effect).

We shadowed a team of five participants representing different roles, including team leads, software engineers, and designers. Everyone on this team had some experience using AI tools before the event. In addition, we observed hackathon participants who approached the hackathon coaches for help.

Observation lasted the full duration of the event (two eight-hour days) and was supplemented by introductory and debrief interviews with the team we shadowed.

Tools

We collected, analyzed, and synthesized data in Confluence.

Screenshot of our field notes while observing the main team of participants.

Copy of my sketch of the field environment. We chose to make sketches rather than take photos to preserve participants' privacy during the event.

Findings

We uncovered several key behaviors that may be indicators of real-life AI implementations and interactions.

Briefly, behaviors included ways in which people:

  • Start AI projects with the technology in mind, and fit the UX to the technology later on
  • Use LLMs to generate new ideas
  • Make decisions that favor ease, timeliness, and a compelling presentation but result in unanticipated consequences (e.g., defaulting to a gendered AI presentation)
  • Remain conscious of responsible AI practices but do not account for them in implementation
  • Grapple with the technology, defining the use case, and the role, value, and function of AI in the implementation

Impact

Results from this study informed planning for primary research activities with Mendix customers.