Jonathan Letourneau, MDes (Master of Design Studies)
I build and optimize UX capabilities for agile innovation teams within large organizations, leveraging UX research methodologies
to inform service delivery decisions for generative AI and emerging technologies.
Innovation Analyst | Mass General Brigham Emerging Technologies and Solutions (MGBETS)
UX Researcher | Dana-Farber Cancer Institute
Information Designer | Self-employed
Metaverse Curation and Relations Strategist | American Medical Extended Reality Association (AMXRA)
Updated October 5th, 2024. These projects are in progress; some details and results are not publicly sharable.
Statements tagged with “*” are left intentionally vague due to the sensitive and proprietary nature of ongoing projects.

I innovate by challenging the status quo, charting a path from ambiguity to solutions. On the Mass General Brigham Emerging Technologies and Solutions (MGBETS) team, I help support the adoption and evaluation of generative AI applications for an enterprise of over 100,000 employees. I support and manage pilots of 1,300+ employees and am responsible for vetting digital health startups with generative AI technology.
1. Implementing and Evaluating Microsoft 365 Copilot for Administrative Staff
Role: Project Lead, User Researcher
Objective: Evaluate the effectiveness of generative AI tools for administrative employees, and use the data gathered to inform a purchase decision for the organization (up to 100,000 employees).
Overview: Allocated and onboarded 300+ users to Copilot, working closely with the technical team to resolve the technical complexities of the rollout due to Microsoft’s enterprise-level app update permissioning.
Community Building and Upskilling: Established an active community of more than 300 members on Microsoft Teams.
Developed a custom training approach with weekly educational content, interactive training sessions, and 1:1s to provide tailored support.
User Experience Research: Implemented a longitudinal research approach to collect feedback throughout the pilot. Frequent feedback is imperative for generative AI tools because of the rapid pace of feature deployment.
*Custom Data Dashboards: Collaborated with a technical analyst to build PowerBI dashboards, providing leadership with highly specific, contextual insights into Copilot adoption by parsing daily user activity by feature used.
*Initial Results and Impact: Generative AI tools for business use cases are currently most effective at summarizing content, not generating it. For example, when drafting an email in Outlook with Copilot, the generated text is too generic and is often discarded. Meanwhile, summarizing meetings is the pilot users’ favorite feature.
Learnings: Within large organizations, even a small pilot requires complex technical planning and focused collaboration between multiple stakeholder groups. Furthermore, it is imperative to create a pilot that accurately reflects employees across the organization – this will inform which employees are most likely to benefit from a license.
2. Leading User Research on the Adoption of Generative AI Tools for Healthcare Providers
Role: User Research Lead
Objective: Evaluate the potential of generative AI solutions to reduce healthcare provider burnout, and identify which clinical settings are best positioned to adopt this technology in its current state.
Overview: Conducted 60+ interviews with healthcare practitioners to assess the adoption and integration of generative AI in clinical workflows. Following the research, I trained and led two colleagues in analyzing the data and building personas and journey maps.
User Research Process Design and Approach
Building Internal Capacity and a Framework for UX Research Methodology: As the sole designer on the Emerging Technologies and Solutions team, I designed this project to teach my team members best practices for conducting in-depth user interviews and analysis. Several key facets are: 1) Co-creating the research objectives and interview guide, 2) Sharing interview best practices, 3) Analyzing and affinity mapping research notes, 4) Demonstrating the value of persona generation.
Explored Opportunities for Generative AI-Augmented Research Analysis: Throughout the research process, I explored the capabilities of large language models for analyzing research, comparing my analysis skills to AI-processed notes.
*Constructing “Evolving” Personas: To offset the research analysis timeline with the rapid deployment of new features, our team is proposing a new type of persona - an “evolving” persona. This evaluative persona accounts for negative sentiments around missing product features and charts a feature prioritization list, with one key feature that is barring adoption highlighted. This communicates to leadership how to coordinate product maturity with user adoption.
Work in Progress: 1) Scheduling workshops to explore provider workflows and conducting 20+ additional interviews with non-users. 2) Publishing research findings in an academic journal.
Learnings: 1) Adopting generative AI solutions is a very personal decision. Expectations, habits, and workflow each play a key role in adopting AI technology. 2) Always take the time to clearly communicate expectations with your team members. To support an academic paper, our team had to document and follow a clear process for research analysis.
3. Consulting on Large Language Models (LLMs) for Data Analysis
Role: Internal LLM Consultant and Educator
Objective: Assist world-leading healthcare researchers and data analysts in adopting LLM workflows for data analysis.
*Overview: Ahead of an enterprise-wide rollout of LLMs, I worked 1:1 with healthcare innovators to explore how they could augment their data analysis with an LLM-first approach. To prepare for this role, I proactively upskilled myself on Microsoft’s Azure OpenAI platform and prompt engineering.
*Optimizing Large Language Models (LLMs) for Custom Use Cases: Assisted with prompt engineering, data cleansing, and model parameter optimization to maximize output quality and relevance for each domain and use case.
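As a flavor of what this prompt-engineering work involves, the sketch below assembles a chat-completion request that pins down the domain, task, and output format before any data reaches the model. The function name and message wording are hypothetical examples, not the actual prompts used in the project; the resulting message list is the standard chat format accepted by Azure OpenAI chat-completion endpoints.

```python
# Hypothetical sketch: constraining an LLM's role, grounding, and output
# format for a data-analysis use case. Names and wording are illustrative.
def build_analysis_messages(domain: str, task: str, data_sample: str,
                            output_format: str = "a markdown table") -> list[dict]:
    """Assemble a system message and user prompt for a data-analysis request."""
    system = (
        f"You are a data analyst supporting {domain} researchers. "
        "Answer only from the data provided; if the data is insufficient, say so. "
        f"Return results as {output_format}."
    )
    user = f"Task: {task}\n\nData:\n{data_sample}"
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user},
    ]

messages = build_analysis_messages(
    domain="oncology",
    task="Summarize the most common adverse events",
    data_sample="patient_id,event\n101,nausea\n102,fatigue\n103,nausea",
)
# These messages would then be sent to a chat-completion endpoint, typically
# with a low temperature for more deterministic, analysis-style output.
```

Constraining the output format in the system message, rather than in each query, is one of the simplest parameter-independent levers for raising output quality and relevance per use case.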
*Upcoming Work: Will act as a mentor to support the first wave of longitudinal projects that leverage LLMs.
Learnings: 1) Have a clear picture of the desired output. When prompting LLMs, constructing a prompt that clearly articulates the desired output is just as important as phrasing the query. 2) Balance the tokens in the system message and prompt – this will help reduce costs over long conversations.
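The cost point in learning 2 can be illustrated numerically. The sketch below uses a rough heuristic of roughly four characters per token for English text (an assumption; exact counts come from the model’s tokenizer), and models the fact that in a multi-turn chat the system message and all prior turns are resent with every request.

```python
# Rough heuristic sketch; real token counts come from the model's tokenizer.
def estimate_tokens(text: str) -> int:
    """Very rough estimate assuming ~4 characters per token of English text."""
    return max(1, len(text) // 4)

def conversation_input_tokens(system_message: str, turns: list[str]) -> int:
    """Total input tokens billed across a multi-turn chat: the system message
    and the accumulated history are included in every request."""
    total = 0
    history = 0
    for turn in turns:
        history += estimate_tokens(turn)
        total += estimate_tokens(system_message) + history
    return total

long_system = "x" * 2000   # ~500 tokens
short_system = "x" * 200   # ~50 tokens
turns = ["y" * 400] * 10   # ten ~100-token user turns

# The shorter system message saves its 450-token difference on every one
# of the ten requests.
print(conversation_input_tokens(long_system, turns) -
      conversation_input_tokens(short_system, turns))  # 4500
```

Because the system message is repeated on every turn, trimming it pays off linearly with conversation length, which is why balancing tokens between the system message and the per-turn prompt matters for cost.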