In March, I had the honour of being selected to speak at ESOMAR’s LATAM Congress, where I presented how methodological innovation can help shape a more responsible AI governance order across the data and insights ecosystem. My paper, embedded below in open access, explores both the promise and the cautions of using ethically governed synthetic datasets to better measure the opinions, aspirations, and future trajectories of hard-to-reach groups such as Latin American immigrants in North America.
Drawing on rich census microdata and Generation1.ca’s own community research, I discussed how privacy-protective research technologies can be used to augment real-world datasets and model future policy impacts with greater depth and foresight. In particular, I showed how combining anonymized census scaffolding with survey data and broader community insight can help us better understand immigrant retention, immigrant churn, and the structural conditions shaping both.
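The augmentation step described above can be sketched in miniature. The records, column names, and per-column distributions below are illustrative assumptions for exposition only, not the actual census variables or the generator used in the paper; the idea is simply that a synthetic sampler fitted to an anonymized scaffold can extend a small real-world dataset.

```python
import random
import statistics

# Hypothetical census-style scaffold rows (placeholders, not real microdata).
census_rows = [
    {"age": 34, "years_in_country": 5, "region": "ON"},
    {"age": 41, "years_in_country": 12, "region": "QC"},
    {"age": 28, "years_in_country": 2, "region": "BC"},
    {"age": 52, "years_in_country": 20, "region": "ON"},
]

def synthesize(rows, n, seed=0):
    """Generate n synthetic records: each numeric column is sampled from a
    normal distribution fitted to the real rows, and each categorical column
    from its empirical frequencies. A deliberately simple stand-in for the
    privacy-protective generators discussed in the talk."""
    rng = random.Random(seed)
    numeric = ["age", "years_in_country"]
    categorical = ["region"]
    fitted = {c: (statistics.mean([r[c] for r in rows]),
                  statistics.stdev([r[c] for r in rows])) for c in numeric}
    synthetic = []
    for _ in range(n):
        rec = {c: max(0, round(rng.gauss(*fitted[c]))) for c in numeric}
        for c in categorical:
            rec[c] = rng.choice([r[c] for r in rows])
        synthetic.append(rec)
    return synthetic

# Augment the real scaffold with 100 synthetic records.
augmented = census_rows + synthesize(census_rows, n=100)
print(len(augmented))  # 104
```

A production pipeline would of course use richer joint-distribution models and formal privacy guarantees rather than independent per-column sampling, but the shape of the workflow is the same: fit on the anonymized scaffold, sample, then layer survey data on top.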
My analysis highlighted that social inclusion matters deeply: bias and discrimination weaken immigrants’ intentions to stay, while digital confidence, especially AI readiness, strengthens their long-term sense of possibility and belonging. These patterns were also consistent with what we have seen across other immigrant groups in our broader community research. I am grateful to Leonardo Valente and Agustín Elissondo for the opportunity to work with Livepanel’s In-Finite Express Mode to run these experiments and extend our survey research through machine learning models trained on both our own data and census microdata. In the absence of large prior user-profile datasets, this approach offered a powerful way to model future scenarios, surface hidden policy implications, and better reflect the realities of communities too often left under-measured.
In the research world, I am constantly confronting individual, organizational, and systemic questions about AI governance tradeoffs. At CIPHER in late February, for example, I explored these tensions in greater depth: privacy versus utility, where the risk of reidentification must be weighed against the value of richer insight; innovation versus regulation, where speed can outpace accountability; transparency versus security, where black-box methods complicate trust and scrutiny; and access versus control, where unclear hierarchies and weak role-based permissions can undermine data minimization and purpose limitation. The patchwork of competing state privacy laws, in place of comprehensive federal privacy legislation, is one illustration of how insufficient prioritization can burden data- and insight-dependent organizations. None of these are binary choices. They are strategic decisions that must be prioritized and calibrated carefully to maximize value for organizations, stakeholders, customers, communities, and economies alike.
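One way to make the privacy-versus-utility tension concrete is a k-anonymity check on quasi-identifiers: the smallest group of records sharing the same identifying combination bounds how easily an individual can be singled out. The fields and records below are illustrative assumptions, not data from the study.

```python
from collections import Counter

def k_anonymity(rows, quasi_identifiers):
    """Return the k-anonymity level of a dataset: the size of the smallest
    group of records sharing the same quasi-identifier combination.
    Low k means higher reidentification risk; generalizing fields
    (e.g. exact age -> age band) raises k at some cost to utility."""
    groups = Counter(tuple(r[q] for q in quasi_identifiers) for r in rows)
    return min(groups.values())

# Illustrative records with age already generalized into bands.
records = [
    {"age_band": "30-39", "region": "ON"},
    {"age_band": "30-39", "region": "ON"},
    {"age_band": "40-49", "region": "QC"},
    {"age_band": "40-49", "region": "QC"},
]
print(k_anonymity(records, ["age_band", "region"]))  # 2
```

Coarsening a quasi-identifier (wider age bands, broader regions) pushes k up and reidentification risk down, but each coarsening step also erases analytical detail, which is exactly the calibration the privacy-versus-utility tradeoff demands.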