Client
Lightful
Role
Head of Design
Direct Reports
4
End-to-end creation of a scale-up AI platform
Owning the vision & execution of a unique AI-powered suite of B2B marketing & fundraising tools
Visit Site
▷ Overview
Founding Designer/Head of Design at Lightful
Grew the team to 5
Created a set of AI-powered fundraising, learning and marketing tools
User groups: marketers and fundraisers around the world
Articulated the product vision and designed the end-to-end experience
▷ Outcome
Built and iterated on 6+ full and distinct AI features for web and app
Contributed to a 3x increase in platform use
Increased NPS score by 30+ points
Outputs from these tools informed the inputs of other tools, multiplying the overall impact.

▷ The Challenges
Lightful is a tech-for-good company providing impactful fundraising and marketing tools to non-governmental, government and business users around the world. It was looking to add AI to its suite of tools, but its early platform lacked a vision for how AI could strengthen its offering, even though there was a strong sense it could help. Its users also had a very wide range of abilities, especially with AI-powered tools, and it wasn't clear precisely which tools were needed to improve fundraising and marketing for non-profits and non-governmental organisations.
How could we translate ambiguous AI potential into a simple, high-impact product suite that empowered organisations regardless of their technical ability?

▷ Understanding what our users wanted
I planned a structured research programme to determine what problems our users had and how they dealt with them. This included user interviews, which gave us information we turned into jobs-to-be-done and high-level personas. To understand how users currently dealt with those problems, we compared different competitors against our JTBD. We found that our users were unsure how to use AI and wanted straightforward tools to build their confidence in social media. They wanted to quickly generate publishable content that reflected their organisation's tone, goals and audiences. Some of this information was contradictory (such as the AI ability levels of different audiences), so we had to prioritise the persona we thought would have the biggest impact.
Competitor Review + Personas + JTBD formed the basis for ideation.

▷ Turning insights into action
We knew what users' goals were, so how did we solve them? I led ideation sessions with my team, the developers and product to address user goals and pain points. I ran modified "Crazy 8s" workshops and got participants to sketch out their ideas with wireframe components in Miro. We prioritised the ideas based on impact and feasibility, and stress-tested them by thinking through use cases in workshops. I then led the team in using Bolt and Replit to vibe-code the winning ideas and generate fast feedback on prototypes.
The core question was: how could we ensure these ideas provided maximum value to the user with minimum input from them?

▷ Deciding what worked
There were some great ideas. One was a "Keystone Story" tool, which generated organisational narratives by scraping the web using only a user's email address. Another, the "Persona Tool", helped users improve their personas by giving them curated feedback: what was missing, what was too vague, and so on. But not all the ideas were great. We spent a good deal of time on a "Grant Tool", aimed at helping users write grant applications. After iterating on it, it became clear to me that the scope and variables were too great for what we were trying to achieve. Too many types of grant, too many types of question and too many desired outcomes meant that neither our effort nor the user's interactions with the tool would be worth it.
This was a vital decision I proposed to leadership: the Grant Tool didn't fit the scope of our needs or our users' needs, so we needed to pivot away from it.
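To give a feel for the Keystone Story concept, here is a minimal TypeScript sketch of the idea: derive the organisation's website from the user's email domain, fetch its homepage, and ask an LLM for a first-draft narrative. It is illustrative only, not Lightful's implementation; the generateText helper and the prompt wording are hypothetical stand-ins for whichever LLM provider is used.

```typescript
// Illustrative sketch only, not Lightful's implementation.
// Assumes a fetch-capable runtime (Node 18+); generateText is a placeholder.

type KeystoneStory = { organisation: string; narrative: string };

// Derive the organisation's website from the user's email domain.
function domainFromEmail(email: string): string {
  const domain = email.split("@")[1];
  return `https://${domain}`;
}

async function draftKeystoneStory(email: string): Promise<KeystoneStory> {
  const url = domainFromEmail(email);

  // Fetch the organisation's homepage as raw HTML (a real version would
  // crawl several pages and strip the markup first).
  const html = await (await fetch(url)).text();

  // Hypothetical LLM call that turns scraped copy into a first-draft narrative.
  const narrative = await generateText(
    `Summarise this organisation's mission, audience and tone as a short ` +
      `"keystone story" a fundraiser could reuse:\n\n${html.slice(0, 8000)}`
  );

  return { organisation: url, narrative };
}

// Stand-in for a real LLM SDK call; wire up your own provider here.
async function generateText(prompt: string): Promise<string> {
  throw new Error("replace with an LLM provider call");
}
```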

▷ Building with AI
AI, as it stood, was largely uncharted territory. So, in parallel with idea generation, I facilitated an initiative with the development and partnership teams to understand how assets users had already uploaded to our platform could be used in any of our new tools. This was challenging because our wide array of users had wildly different assets. We found that we could use MCP servers to cross-pollinate content between tools and generate useful outputs, such as using the personas created in the Persona Tool to give feedback on campaigns created in the Campaign Tool. Because we were working with AI, I spearheaded an "AI Design Pattern Library" and a "Prompt Library" to ensure we were utilising AI in ways that were cutting edge and usable.
This was all done through a focused initiative I facilitated, called the "AI Squad", consisting of weekly tasks and meetings.
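As a rough illustration of the cross-pollination idea, the sketch below exposes the Persona Tool's output as an MCP tool that the Campaign Tool (or any MCP-aware client) could call when generating feedback. It assumes the official @modelcontextprotocol/sdk for TypeScript; the tool name, data shape and loadPersonas helper are hypothetical.

```typescript
// Hypothetical MCP server exposing Persona Tool output to other tools.
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

// Hypothetical data access: in reality this would query the Persona Tool's store.
async function loadPersonas(organisationId: string) {
  return [{ organisationId, name: "Example persona", goals: ["illustrative only"] }];
}

const server = new McpServer({ name: "persona-tool", version: "1.0.0" });

// A tool the Campaign Tool's AI can call to ground its feedback in the
// personas the user has already created.
server.tool(
  "get_personas",
  { organisationId: z.string() },
  async ({ organisationId }) => {
    const personas = await loadPersonas(organisationId);
    return { content: [{ type: "text", text: JSON.stringify(personas) }] };
  }
);

// Serve over stdio so any MCP-aware client can connect.
const transport = new StdioServerTransport();
await server.connect(transport);
```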

▷ The Solution
The tools that got the most positive feedback from the vibe-coded prototypes were the ones we ran with, after further iteration. We started with the Keystone Story Tool and the Campaign Tool, followed by further AI-powered tools: an Ethical Story Tool and a Post Creation Tool. For each tool we ran hundreds of prompt tests, checking outputs against different inputs and prompts. This was very laborious at first, but over time we sped it up through agent automations. These tools gave users what they wanted: quick results from minimal input, learning through quick feedback, and useful content generated from their pre-existing resources. They also supported 'levelling up', with basic explanations for beginners and advanced features for experienced AI users.
Thinking holistically, I came up with one idea: the "Keystone Story Tool" could form the foundation that could then inform other AI assets for the user.
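For a sense of what the prompt testing looked like before we automated it with agents, here is a minimal sketch of a prompt-test matrix: every prompt variant is run against every sample input and the results are written to a file for side-by-side scoring. The prompt templates, inputs and generateText helper are all illustrative placeholders, not the production prompts.

```typescript
// Minimal prompt-test matrix sketch; all prompts and inputs are illustrative.
import { writeFileSync } from "node:fs";

const promptVariants = [
  "Write a fundraising post in the organisation's tone: {{input}}",
  "Write a short, friendly fundraising post for social media: {{input}}",
];

const sampleInputs = [
  "Animal shelter appealing for winter donations",
  "Literacy charity launching a volunteer drive",
];

async function runMatrix() {
  const rows: { prompt: string; input: string; output: string }[] = [];

  // Run every prompt variant against every sample input.
  for (const template of promptVariants) {
    for (const input of sampleInputs) {
      const prompt = template.replace("{{input}}", input);
      const output = await generateText(prompt);
      rows.push({ prompt: template, input, output });
    }
  }

  // Dump results so the design team can score them side by side.
  writeFileSync("prompt-test-results.json", JSON.stringify(rows, null, 2));
}

// Stand-in for a real LLM SDK call; wire up your own provider here.
async function generateText(prompt: string): Promise<string> {
  throw new Error("replace with an LLM provider call");
}

runMatrix().catch(console.error);
```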
