Ethical AI: Triomphe-ant Insights from the CivicGrid Challenge

Congratulations to Generation1.ca’s Fall 2025 Silver Prize Winning Team — Triomphe — featuring Habiba Salaheldin, Rafee Walji, Lenaisa Redoble, and Jadyn Cuu.

Generation1.ca’s CEO caught up with the team to discuss their creative journey, experiences during the Case Competition, and their shared aspirations for the future. A special thank you to the Royal Ontario Museum, ESOMAR, Remitly, Vividata, Securian Canada, Humber Polytechnic, and all our valued partners for their generous contributions throughout the year in support of the competition and of top global talent.

Kudos!

“An engaging and well-coordinated presentation with smooth speaker transitions and confident audience interaction. The team demonstrated strong competence in addressing questions and maintaining a cohesive flow throughout. The introduction clearly defined the problem, followed by well-chosen case studies that provided valuable context—one illustrating failure (the U.S. justice system) and the other success (Estonia’s KrattAI). These examples effectively set the stage for the proposed solution.

The framework was clearly articulated, with a strong explanation of how it addressed the identified challenges. The inclusion of a launch roadmap added credibility and structure. The statement “trust is not an abstract concept” was particularly powerful, reinforcing the team’s focus on practical, actionable implementation.

While the trailer leaned toward a promotional tone, the presentation itself was thoughtful, well-crafted, and concluded with meaningful, actionable insights.”

Jury Comments on Team Triomphe

Could you start by sharing your career, educational, and professional background — and tell us what drives your interest in areas like AI ethics, data governance, or public innovation? What would your dream role look like in the years ahead?


Habiba Salaheldin: I grew up in Egypt but completed my undergraduate degree at the University of Toronto in cognitive science, philosophy, and linguistics. I worked as an AI researcher for a company in Egypt before moving into government relations consulting. What drives my interest in AI is its duality: it mirrors human thought yet exposes our biases. I’m fascinated by how algorithms can both amplify and correct human limitations.

Rafee Walji: I graduated from Western in business management with a specialization in finance, and I’ve worked as a business development analyst. I am interested in AI because it’s a vastly growing industry that has completely changed the way we approach work. The world will look different in five years, and I want to be a part of that change.

Lenaisa Redoble: My degree was in political science with a minor in health sciences from Alberta, and then I did my postgraduate in research analysis in Ontario earlier this year. I’m currently in government relations consulting and research, but I’ve always been interested in the ethical implications of tech, particularly the digital divide and what that means for developing states. My major areas of interest have always been related to how we can bring about restorative justice to systems that have historically prejudiced marginalized groups. I believe AI is an excellent opportunity and can bring a new advantage in this sense. 

Jadyn Cuu: I graduated with a bachelor’s in nursing last year. I am currently a registered nurse, but while in school, I worked as a tutor and piano instructor. Through my experience in these teaching-focused roles, I’ve seen how much impact AI can have in shaping future generations’ education, which drew my interest in learning more about the topic.

Understanding the CivicGrid Challenge: CivicGrid Alliance’s scenario involved fragile public trust and high expectations for transparency. What first struck you about this challenge, and how did your team decide where to focus your efforts in balancing innovation, ethics, and accountability?


Habiba Salaheldin: What struck me first was how fragile public trust becomes when technology outpaces regulation. From a technical lens, I realized that innovation means nothing without accountability. Our team decided to focus on building explainable, transparent models by ensuring that every algorithmic decision could be traced, understood, and communicated clearly to citizens and policymakers alike.

Rafee Walji: What struck me first was how deeply public trust shapes the success of any AI governance model. As a team, we focused on building a roadmap that prioritized transparency and stakeholder engagement, ensuring our innovations didn’t just perform well but also inspired confidence and accountability among citizens.

Lenaisa Redoble: We act like AI is super new, and it is, but it also isn’t. The underlying ethical principles are still the same. The case studies we showed in our presentation are a good example of this. Innovation is only one part of the story. While we innovate, we must also be mindful of mistakes made in the past, and try to learn from history rather than repeat it.

Jadyn Cuu: Innovation needs ethics in order to be successfully adopted into practice, or else the public will hesitate to trust it. With AI already showing how much potential it has in supplementing preexisting services, the next step is to ensure that ethics are applied to the novel technology.

The AI, Talent & Trust Stack: Your blueprint had to integrate global benchmarks like OECD, NIST, and ISO while keeping it human-centered. How did you approach defining CivicGrid’s ideal “AI, Talent & Trust Stack,” and what made your design both practical and visionary?

Habiba Salaheldin: When defining the stack, I approached it like building a resilient architecture: a process of layering data ethics, model transparency, and workforce capacity together. Our design was visionary because it didn’t just address the technical layer; it also embedded accountability through governance structures.

Rafee Walji: We focused on human-centric principles first, like the OECD’s. Our goal was to create a model that balanced governance rigor with flexibility. By integrating ethical AI design, workforce upskilling, and transparent data standards, our stack became both actionable for policymakers and aspirational for long-term digital trust.

Lenaisa Redoble: We wanted to pull from sources that were reputable and robust. We didn’t just want to pull random KPIs and benchmarks; I think anybody can do that. We focused on gathering benchmarks that could also scale within Canada’s policy ecosystem.

Jadyn Cuu: We used an evidence-based approach, building on preexisting stacks and finding the gaps that needed to be addressed.

The Big Insight: What was the most powerful insight or turning point your team experienced during the competition — something that fundamentally changed how you view responsible AI or public-sector innovation?

Habiba Salaheldin: The most powerful insight for me was realizing that “responsible AI” is not a static checklist, but a living system. Responsible innovation must be iterative, self-correcting, and grounded in humility. True progress happens when technology and ethics evolve together.

Rafee Walji: For me, the turning point was realizing that responsible AI isn’t just about compliance, it’s about culture. Seeing how other teams approached governance through purely technical lenses made me appreciate how much trust depends on communication and design. Our model reframed AI as a social contract between institutions and citizens, not just a technological framework.

Lenaisa Redoble: I think it was when we came up with the AI Good Governance model. When we think of AI, we immediately think of numbers or robots instead of politics and people. The institutions that shape discourse are just as important as the discourse itself. When we create good institutions that value free and fair discussion, we can have good conversations and make good rules.

Jadyn Cuu: It’s always great to see how everyone else takes on the same prompt and to compare and contrast that with our own team’s presentation. Our take was very different from some of the other competitors’, and that was very eye-opening for me.

The Launch Narrative & Creative Storytelling: The board wanted an optimistic, human-centered launch story inspired by Claude’s “Keep Thinking” campaign. How did your team craft a narrative that made people believe in AI as a force for good — “AI that helps people think further”?

Habiba Salaheldin: We really wanted to humanize the technical side of AI. The translation piece was critical. You can have the best idea in the world, but if it’s not presented in a way that is digestible for people to use, then your idea is worth nothing. I think that’s why the AI Good Governance model works so well: it’s something we’ve seen before, and it mirrors our current government systems. AI shouldn’t be scary; it’s actually a lot easier to understand than we think.

Rafee Walji: We framed our story around optimism grounded in accountability, by positioning AI not as a replacement for human thought, but as a catalyst for it. I focused on the roadmap’s storytelling arc, connecting real-world examples to everyday challenges. Our message was that with transparency and inclusion, AI can empower people to think further, collaborate better, and trust deeper.

Lenaisa Redoble: Our biggest focus was making sure that the judges understood that as long as the appropriate safeguards are in place, AI is a wonderful opportunity to challenge our assumptions. In our stack, we talked a lot about minority groups who typically get excluded from these circles. Our approach was to question everything we fundamentally knew. How can AI change this, but for the better?

Jadyn Cuu: We wanted to use case studies showing how AI can be helpful, while also bringing to light unsuccessful case studies and what we can learn from them, to develop a more responsible AI future.

Teamwork & Leadership in Action: What moments of debate, collaboration, or uncertainty most shaped your process? How did your team navigate differences in opinion while staying aligned on the vision for trust and transparency?

Habiba Salaheldin: We never really had any big issues in terms of differences. It was definitely overwhelming at one point, since there was so much information and research to go through in such a short amount of time, but we quickly saw the common theme through everything: AI is smart but it’s not nearly smart enough. Humans have to enable it to its fullest potential before it can really start empowering us. Once we understood that, it was easier to approach our research in a targeted way. 

Rafee Walji: One of the most defining moments was when we discussed which specific benchmarks to pull. Some of us wanted particular international models, while others wanted to focus on Canadian models. We found balance through open dialogue and respect for each member’s expertise. At the end of the day, we were all trying to communicate the same thing.

Lenaisa Redoble: We all have different backgrounds and experiences. Some of us never really gave much thought to AI before this competition, while some of us have worked on and researched it extensively. Regardless, we came together really well and complemented each other’s strengths. We all had the same vision, and I think that’s the case for most people our age: we love the idea of AI and think it’s a promising tool. Once we communicated our ideas clearly, the rest of our success came from assigning each other the parts that highlighted our unique skill sets.


Jadyn Cuu: Mutual respect and trust in the expertise of other group members. We had open and collaborative discussion if there was a difference in opinions, but overall, our unique experiences and education shaped our team’s outcome.

Impact on Your Future Thinking: How has working on the CivicGrid challenge influenced your understanding of how governments and organizations can use AI responsibly — and what skills or mindsets do you think are now most essential for professionals in this space?

Habiba Salaheldin: This challenge changed how I view AI governance. It’s not just about coding or compliance, but designing systems that reflect societal values. I’ve learned that professionals in this space need both technical literacy and ethical reflexivity. The ability to bridge disciplines, question assumptions, and prioritize human impact over efficiency will define the next generation of responsible innovators.

Rafee Walji: The CivicGrid challenge showed me that responsible AI governance is as much about adaptability as it is about ethics. Governments and organizations need frameworks that evolve with technology and society. For professionals, the most important skills are systems thinking, empathy, and interdisciplinary collaboration. It is beyond just understanding how AI works, but how it shapes trust, power, and public good.

Lenaisa Redoble: While doing the research for the CivicGrid challenge, I learned a lot about the current reports and organizations centred around the responsible use of AI. The field is not as developed as I thought it was, and while the frameworks are robust, a lot of it seems arbitrary at times. One body will prioritize a particular measurement, while another won’t even mention it. Even seasoned professionals working in esteemed regulatory bodies are still learning and going through trial and error in their frameworks. With this in mind, professionals must stay curious and question everything. Nothing is set in stone, and if you believe you can make a positive change, your words and impact are just as important as those of somebody who has been in the field for 20+ years. We’re all still learning, and the field is still developing.

Jadyn Cuu: This challenge has helped me understand AI not just as a tool, but as a mechanism that shapes society. I believe that professionals in this space should keep an honest and accountable mindset, with the goal of serving the people.
