By Arundati Dandapani
On Friday, January 26, 2024, close on the heels of a very successful kickoff to International Data Privacy Week by the Information and Privacy Commissioner of Ontario, Patricia Kosseim, and her team, as well as celebrations by the International Association of Privacy Professionals (IAPP) and ESOMAR, I was pleased to present our annual privacy webinar, this time alongside Kuno Tucker, Chief Compliance Officer at Manulife Wealth. Together we covered the top ten privacy insights for data leaders and the evolving role of human-led collaboration as the tools, uses, applications, and conversations of artificial intelligence (AI) and generative AI grow in our midst in new and disruptive ways.
As unveiled by the 2023 wave of the Generation1.ca Global Industry Skills Study, the twin problems of data deluge and data deficits have accelerated the need for data literacy in today’s workforce and will continue to reshape how people work with their teams and with changing technologies. We generate over 300 million terabytes (roughly 0.3 zettabytes) of data per day, and 181 zettabytes (181 trillion gigabytes) of data will be generated in 2025, with video accounting for over half of all internet traffic, according to Big Data Analytics News 2024. In parallel, a history of information gaps, from missing birth and death records to COVID health data, and data deficits in Canada’s Indigenous economy that have hindered policymakers and Indigenous leaders from measuring progress and making informed decisions, according to a 2023 report by the Bank of Canada, has created a need for a more data-literate talent infrastructure.
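To keep those orders of magnitude straight, the unit arithmetic can be sanity-checked in a few lines. The figures are the estimates quoted above; the only assumption is decimal (SI) units, where 1 TB = 10^12 bytes and 1 ZB = 10^21 bytes:

```python
# Sanity-checking the data-volume estimates quoted above (SI units assumed).
TB = 10**12   # terabyte in bytes
ZB = 10**21   # zettabyte in bytes

daily_bytes = 300_000_000 * TB   # ~300 million TB emitted per day
annual_2025_bytes = 181 * ZB     # ~181 ZB projected for 2025

print(daily_bytes / ZB)  # 0.3 — daily volume expressed in zettabytes
# The 2025 projection vs. today's daily rate annualized (roughly 1.65x):
print(annual_2025_bytes / (daily_bytes * 365))
```

The ratio in the last line is a quick plausibility check: the 2025 projection implies data creation well above today's estimated daily rate, consistent with the growth the study describes.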
We are also becoming hyperaware of the power of artificial intelligence to help us wade through, organize, and put to better use the vast amounts of data in our custody. The fair information principles are a good starting point: they inform our laws and help ensure our work with personal data complies with basic principles of fair use, whether in the private or public sector, federally or provincially, and even internationally.
Top five privacy insights worrying data leaders:
- The explosive growth of AI-enabled technologies, combined with a lack of knowledge about regulation, regulatory trends, and the technologies themselves, is creating both a digital divide and a steep learning curve; those who are not experimenting with AI are prone to failure and obsolescence. The Executive Vice President of AI at Mastercard once remarked that the “biggest risk of AI is of not using AI”.
- The synthetic data conundrum is an important issue in the privacy regime, particularly with respect to data quality and traceability, both on its own and when compared against de-identified and pseudonymized data. While data drawn from reality is exposed to biases and information gaps, synthetic data is artificially generated and can cover the full range of permutations and combinations of circumstances, boosting the accuracy of AI in cases where training data is limited. However, even synthetic data faces quality issues: without adequate verification procedures it can carry privacy risks and inferior data quality, depending on the model used to generate it. Another challenge is negative perceptions of synthetic data among consumers, citizens, businesses, and society. Despite this, Gartner predicted that by 2024, synthetic data would overshadow real data in AI models.
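As a toy illustration of that coverage argument (every attribute name and value below is invented for the example, not drawn from any real dataset): enumerating synthetic records over all attribute combinations yields cases that a biased real-world sample never captured:

```python
import itertools

# Hypothetical attribute domains for a synthetic customer dataset.
age_bands = ["18-34", "35-54", "55+"]
regions = ["urban", "rural"]
plans = ["basic", "premium"]

# A real-world sample is often biased: some combinations never appear.
real_sample = {("18-34", "urban", "basic"), ("35-54", "urban", "premium")}

# Synthetic data can enumerate every permutation of circumstances...
synthetic = list(itertools.product(age_bands, regions, plans))

# ...supplying training rows even for combinations absent from reality.
unseen = [row for row in synthetic if row not in real_sample]
print(len(synthetic), len(unseen))  # 12 combinations, 10 unseen in the real sample
```

Real synthetic-data generators model statistical distributions rather than enumerating combinations, but the coverage benefit is the same: rare or missing cases get represented, which is exactly where the quality and verification questions above come in.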
- Privacy for children remains an afterthought, not built into an internet catered to and designed for adults first. “Sharenting” is just one highly prevalent example: parents share their children’s identities with the world before they are even born, from ultrasound images to infant photos posted publicly without informed consent. By 2030, it is estimated that 66% of identity theft violations will be caused by sharenting.
- Who will be the AI leader of the 21st century, and how? The EU, Canada, the USA, the UK, Singapore, and other emerging yet technologically curious markets are competing for glory on that stage in different ways.
- How can we compete with the pace of Generative AI and other web3 and AI technologies from a skills, literacy, and labour market standpoint so that we fully harness the talent of current and future AI governance professionals?
Top five privacy insights creating optimism in data leaders:
- Privacy is growing, and so are the jobs. Particularly since the pandemic, the growth of privacy violations has kept this business and profession in constant need of good and better talent to drive compliance with regulations and with ethical and social responsibility across organizations and businesses handling personal data.
- AI is growing, and so are the jobs. Again, the vast growth of AI has expanded the need for AI literacy and AI governance skills, signalled by the demand for new certifications such as the IAPP’s AI Governance certification, many other certifications from Google Cloud, Microsoft, and others on AI and generative AI literacy, and various courses offered in Canada by the Vector Institute (Toronto), Amii (Edmonton), and Mila (Montreal).
- Technology is growing, and so are the jobs. This isn’t news, but the rise of sectors like e-commerce, renewable energy, clean tech, solar panel installation, machine learning, artificial intelligence, robotics engineering, and web3 is creating new opportunities for highly skilled, data- and AI-literate talent.
- The unprecedented demand for new knowledge, cutting-edge methods, innovative infrastructure, and talented individuals is driving regulatory reform across the globe. This is evident in the ongoing consultations and revisions of legislation such as the Children’s Online Privacy Protection Act (COPPA) and the California Consumer Privacy Act (CCPA) in the United States, Bill C-27 in Canada, and the European Union’s AI Act in Europe, among others. These regulations are being refined and developed to ensure that the use of emerging technologies is fair, transparent, and accountable while promoting ethical and responsible innovation.
- Teams will have to remain inter-skilled across disciplines, cross-functional, and multidimensional, reflecting the diverse data they work with, to survive and thrive in an AI- and privacy-enabled, human-led and human-centred world.
Kuno Tucker followed my presentation by expanding on the history and definition of AI and what makes its power so awe-inspiring and sometimes bone-chilling. He went on to discuss the importance of AI governance and its safe and responsible implementation across organizations and sectors. His advice was that anything dealing with data will be touched by AI in the future. A lack of knowledge about regulations and compliance procedures can lead to fines at best, and to reputational harm, loss of customer trust, and even potential lawsuits at worst, even if a breach happened through third-party actors. This is why having a skilled and knowledgeable (well-trained) data governance and AI governance committee at the staff and board levels, with your CCO/CTO/CPO/CISO all present at the table, gives you a competitive edge in securing your safety protocols. Once you understand your data sources, tag and store PII separately, provide a framework for what data is permitted for AI use, and establish allowed use cases, you should also create a reporting structure for when things go wrong. Finally, Kuno emphasized that implementing Privacy by Design principles in your AI products, business, technology, or workflows is the only right path to responsible, human-led AI systems and collaborations.
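The sequence Kuno outlines (know your data sources, tag PII, define permitted AI uses, and report incidents) might be sketched as a simple governance gate. Every field name, use case, and policy below is a hypothetical placeholder for illustration, not a real schema or any organization's actual policy:

```python
# Minimal sketch of an AI data-governance gate, using assumed names.
PII_FIELDS = {"name", "email", "sin"}           # tagged and stored separately
ALLOWED_USE_CASES = {                           # framework of permitted AI uses
    "churn_model": {"plan", "tenure_months"},   # non-PII fields only
    "support_chatbot": {"plan"},
}
incident_log = []                               # reporting structure for violations

def request_fields(use_case: str, fields: list[str]) -> set[str]:
    """Grant only permitted non-PII fields; log anything out of policy."""
    requested = set(fields)
    allowed = ALLOWED_USE_CASES.get(use_case, set())
    blocked = (requested & PII_FIELDS) | (requested - allowed)
    if blocked:
        incident_log.append((use_case, sorted(blocked)))  # escalate for review
    return (requested & allowed) - PII_FIELDS

# A request mixing a permitted field with PII: only "plan" gets through,
# and the blocked field is logged for the governance committee.
print(request_fields("churn_model", ["plan", "email"]))  # {'plan'}
```

The point of the sketch is the ordering Kuno describes: PII separation and allowed use cases are defined up front, and the reporting path exists before anything goes wrong, not after.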
We took questions after our presentations, and wished everyone success with their data and AI strategies for the new year, and also reminded attendees to come out and support top talent at Generation1.ca’s Virtual Insights Career Fair and Case Competition on April 26, 2024.
You can sign up for my monthly email newsletter Culturally Significant to receive the privacy day celebration and related webinars in your inbox here, or email arundati@generation1.ca.

