There is persistent excitement about AI-driven research across markets and industries, but also exasperation at the pace and uncertainty of its regulation. Generation1.ca’s 2025 International Data Privacy Day celebration, held in the last week of January, was a powerful opportunity to highlight the intersection of privacy, AI governance, and social progress. Arundati Dandapani, along with fellow international experts Michalis Michael and Marco Cantin (video at the end of this article), delved into the crucial role of data ethics and privacy laws in shaping inclusive, equitable environments for global businesses and societies. The discussion, featuring perspectives from Canada and the UK, emphasized how data protection and AI governance shape not only compliance but also ethical responsibility, especially in contexts like social media analytics, AI-powered decision-making, and data portability.
AI guardrails are the safeguards that keep AI systems aligned with ethical standards, privacy regulations, and organizational values, mitigating risks such as bias, harm, and misuse. AI models can be influenced by data biases, including selection, confirmation, and measurement biases, which can produce unfair or discriminatory outcomes if not addressed during data collection and training (Chapman University, 2023). The quality, diversity, and representativeness of training data are crucial, as poor data quality can distort predictions and amplify societal biases. Responsible AI deployment therefore requires comprehensive data validation, transparency in data processes, and continuous monitoring for bias (Scientific Publishing, 2023; IBM, 2023). Guardrails also help protect against harmful misuse of AI, such as adversarial attacks or the creation of misleading content (e.g., the misinformation we go on to discuss in the webinar), and they support compliance with privacy and security standards (Google Cloud, 2023). AI guardrails can be categorized into privacy, security, regulatory compliance, and ethical fairness, ensuring that AI systems align with organizational values and minimize risks. By incorporating these guardrails, AI developers and organizations can uphold their responsibility to create fair, ethical, and transparent AI systems that minimize risks and maximize societal benefits (McKinsey & Company, 2023). ESOMAR has also compiled 20 questions for buyers commissioning AI-based research services from AI-based research providers, covering ethics, transparency, data handling, and model performance, to help ensure responsible and effective AI adoption (ESOMAR, 2023).
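To make "data validation and continuous monitoring" a little more concrete, here is a minimal sketch of one narrow slice of a guardrail: a pre-training check that flags under-represented groups (a rough proxy for selection bias) and counts records missing an explicit consent flag. The field names, threshold, and sample data are hypothetical illustrations, not taken from ESOMAR or any of the frameworks cited above.

# Illustrative guardrail sketch: flag under-represented groups and missing
# consent flags before data enters a training pipeline. Field names ("region",
# "consent") and the minimum-share threshold are hypothetical placeholders.
from collections import Counter

def representation_report(records, group_key="region", min_share=0.05):
    """Return the share of the data for each group that falls below min_share."""
    counts = Counter(r[group_key] for r in records)
    total = sum(counts.values())
    return {g: n / total for g, n in counts.items() if n / total < min_share}

def missing_consent(records, consent_key="consent"):
    """Count records that lack an explicit, affirmative consent flag."""
    return sum(1 for r in records if not r.get(consent_key, False))

if __name__ == "__main__":
    sample = [
        {"region": "Ontario", "consent": True},
        {"region": "Ontario", "consent": True},
        {"region": "Quebec", "consent": True},
        {"region": "Nunavut", "consent": False},
    ]
    print("Under-represented groups:", representation_report(sample, min_share=0.30))
    print("Records missing consent:", missing_consent(sample))

In practice, checks like these would run continuously as data is refreshed, and a failed check would block training or trigger human review rather than simply print a warning.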
Given the work of these experts, always in pursuit of truth, the conversation frequently turned to the pervasive issue of misinformation, which spreads widely across public online platforms and significantly influences consumer behavior and perception. Beyond its impact on consumer decisions, misinformation also presents a serious threat to public safety, with the degree of danger varying by context. While misinformation may not fall directly under privacy regulations, it undermines privacy protections and weakens the guardrails around training data. This is especially critical in sensitive areas such as elections and citizen services, where the integrity of information is essential to democratic processes and societal trust.

Chaired by ESOMAR President Ray Poynter (far right).
The celebration underscored Generation1.ca’s commitment to fostering transparency and ethical standards in the digital age, particularly for immigrant communities in North America. With increasing global reliance on data-driven technologies, Arundati emphasized the significance of building trust through ethical AI practices and legal compliance. The event also addressed the challenges of navigating diverse regulatory environments, such as the GDPR in Europe and emerging data laws in Canada and the US, highlighting how companies must adopt the strictest standards to ensure cross-border privacy protection and secure data management for all stakeholders. Referencing the importance of the ESOMAR Code of Conduct, Michalis Michael stated, “If your compass and your guiding principle is around morality and doing the right thing, you’ll be pretty safe from violating any changes in the law, even if you don’t know about them.” This sentiment echoed the ongoing debate about the importance of regulated versus self-regulated industries, where robust legal frameworks are often seen as essential for ensuring ethical conduct and safeguarding privacy. The conversation also touched on the growing concern about ballot fraud, where AI and digital identification could play a significant role in detecting anomalies and protecting the integrity of elections.
If your compass and your guiding principle is around morality and doing the right thing, you’ll be pretty safe from violating any changes in the law, even if you don’t know about them.
Michalis Michael, CEO and Founder, Listening 247
This year’s International Data Privacy Day event was not just about legalities—it was about making ethical, transparent decisions in a rapidly changing technological landscape. For Generation1.ca, this celebration aligns with their mission to empower immigrants with data literacy and AI leadership skills, ensuring they are equipped to navigate the future’s digital challenges. Through these discussions, the platform aims to support newcomers in understanding the evolving privacy landscape and its impact on their professional growth, ensuring they are well-prepared for future opportunities in a connected world.
Marco Cantin added, “To ensure we protect individual rights, we need more than just good intentions; we need robust regulation that holds all players accountable in an ever-evolving technological environment.” Arundati Dandapani concluded, “Opportunities like these provide essential context for our communities, ensuring they are not only aware of the changes in privacy and AI, but also equipped to leverage them for success in their new professional landscapes.”
To ensure we protect individual rights, we need more than just good intentions; we need robust regulation that holds all players accountable in an ever-evolving technological environment.
Marco Cantin, Deputy Chief Privacy Officer, GFT Technologies
Watch the video below!




