Five Ways to Think about AI Literacy

By Arundati Dandapani

Diverse learners and leaders often ask me how much they need to know about AI to succeed in data and insights careers. In a global industry that, measured by annual turnover and revenues, is now more technology-enabled (39%) than reliant on established research methods (36%), according to ESOMAR’s 2023 data[1], it can be hard to know how much more to learn about AI, or how much to use it for research. Whether you start with trends or with regulations doesn’t matter, so long as you start somewhere, because you’ll need a strong understanding of both. If we ask for transparency from our partners and suppliers, we need the knowledge to be equally transparent about our own tech stack. This builds trust and improves accountability throughout our work and ecosystems.

While machine learning, AI and deep learning remained the top three AI skills used worldwide in the past year, based on LinkedIn data published by OECD.AI[2], it is the AI literacy component that will boost the researcher’s skillset. I suggest five ways for learners and leaders to build AI literacy and knowledge competencies for a fast-changing data and insights industry and marketplace.

  1. Understand AI trends, techniques and technologies, and their business use cases

According to qualitative interviews with industry leaders for the Global Industry Skills Study, AI and generative AI in North America and the Western Hemisphere are expected to have a significant impact on the skills of the future[3]. AI’s biggest gift would be the efficiency it enables by powering large volumes of tasks and activities, freeing up human load. Hope and caution were both palpable in presentations at a recent IAPP Global AI Governance conference, across varied business cases: “The impact of AI is going to be bigger than the impact of the internet and of computers,” the “biggest risk of AI is of not using AI,” and we need to run with this technology with “as many guardrails” as we can put in place (Chauhan, 2023)[4].

For a healthcare company like Pfizer, what is different today from past years of using AI is that “AI is finally scalable,” and that it is envisioned to be “every employee’s copilot,” opening new possibilities in mass analysis and drug discovery and exponentially transforming business with hyper-personalized marketing messages and improved output (Von Kirchbach, 2023)[5], all while the “cost of prediction has come down to zero” with AI for companies like Mastercard (Chauhan, 2023). One AI challenge for Mastercard, for example, is providing transparency on algorithms to customers while ensuring that transparency does not make its systems vulnerable to attacks and fraud. Similarly, testing credit card numbers for bias without linking those cards to sensitive data such as customer attributes, behaviours and habits was cited as a challenge (Tsormpatzoudi, 2023)[6]. At the same conference, Kevin Roose, an author and technology reporter, described how Bing’s chatbot confided a shockingly invasive secret to him. Roose captured the human-AI relationship ethic best with the advice that you should “outsource your chores to AI, but not your choices”[7].

AI governance is a global, cross-disciplinary endeavour requiring a wide range of skillsets. Should AI follow human instructions or human intent? Having a “clear vision in uncertain times” is essential as we prepare to meet new frontiers of AI with “digital humanism” (Cervara-Nevas, 2023)[8]. Anthropic, DeepMind and OpenAI are the top three AI leaders anticipating that AI will become superintelligent within the next ten years, with superintelligence defined as intelligence even more powerful than artificial general intelligence (Kutterer, 2023)[9]. I serve on the governing board of the IAPP’s ANSI- and ISO-accredited credentials and certifications advisory board, including its all-new AI governance certification. The IAPP recently ran its AI Governance Global conference in Boston, and I am constantly amazed and inspired by the advancements and talent powering our evolving global data and insights stage.

Unsupervised learning, supervised learning and reinforcement learning, the three types of machine learning that help scale analytics, computing and business operations, have challenged human learning in shocking ways. They force us to draw on better data and regulatory frameworks, from system prompts to “nutritional labels” for LLMs that clearly state how, where and why certain data or responses emanate, and other practices that respect our human boundary of mind and freedom of mind at once (Zittrain, 2023)[10]. AI’s complexity thus begs us to make technology more explainable and user-centric, even as we will need better skills in programming (R and Python), math and statistics (numeracy), data analysis and, consequently, problem-solving: the same skills found valuable by employers surveyed in the Global Industry Skills Study.
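To ground the terminology, here is a minimal sketch, in pure Python, of how two of these paradigms differ in practice. The function names and toy data are my own, purely for illustration: supervised learning fits a predictor to labelled examples, while unsupervised learning discovers structure in unlabelled data.

```python
def fit_supervised(points, labels):
    """Supervised learning: learn from labelled examples.
    Here, a nearest-centroid classifier over 1-D points."""
    centroids = {}
    for lab in set(labels):
        members = [p for p, l in zip(points, labels) if l == lab]
        centroids[lab] = sum(members) / len(members)
    # The returned predictor assigns a new point to the closest class centroid.
    return lambda x: min(centroids, key=lambda lab: abs(x - centroids[lab]))


def fit_unsupervised(points, iters=20):
    """Unsupervised learning: find two clusters with no labels at all.
    Here, a tiny 1-D k-means with k=2."""
    centers = [min(points), max(points)]  # simple initialisation
    for _ in range(iters):
        groups = [[] for _ in centers]
        for p in points:
            nearest = min(range(len(centers)), key=lambda i: abs(p - centers[i]))
            groups[nearest].append(p)
        # Move each center to the mean of the points assigned to it.
        centers = [sum(g) / len(g) if g else c for g, c in zip(groups, centers)]
    return sorted(centers)


predict = fit_supervised([1.0, 1.2, 5.0, 5.3], ["low", "low", "high", "high"])
print(predict(1.1))                            # -> low
print(fit_unsupervised([1.0, 1.2, 5.0, 5.3]))  # -> roughly [1.1, 5.15]
```

Reinforcement learning, the third paradigm, instead learns a policy from trial-and-error rewards, and does not reduce to a few lines as neatly.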

  2. Understand AI principles, frameworks and regulations that complement or disrupt each other

Interpreting legislation and its strategic impact requires the ability to read quickly, synthesize large legal documents and data into concise recommendations, and translate core values into the law and the business responsibilities organizations carry as they meet their objectives.

In Canada, we need to know current laws, but also understand the upcoming privacy reform in Bill C-27 and the Artificial Intelligence and Data Act (AIDA) it will enact when it becomes law. For example, under the current privacy regime the federal privacy regulator (the Privacy Commissioner of Canada) has no order-making powers, but he can receive complaints, investigate and make recommendations; he and his office are actively investigating the ethical practices of OpenAI jointly with provincial authorities[11]. Once Bill C-27 becomes law, however, the OPC’s powers will acquire enforcement capabilities[12]. In the US, the White House executive order on AI safety has likewise opened new realms of opportunity for those looking to shape the economy positively.

Understanding the OECD framework for responsible AI, the US NIST AI Risk Management Framework, ISO standards, the Georgetown taxonomy of AI risks and harms, and the EU’s AI Act alongside the work of ENISA and the JRC can help professionals better envision how global collaboration and cooperation shape policy and guidance standards in a competitive marketplace, where citizen and consumer trust are built on the ethical AI practices of data-collecting organizations.

  3. Understand the human responsibility in ethically designing, using and implementing AI

Data and insights professionals know that bias remains the dominant concern with AI systems and models; privacy, transparency, and fair and responsible use are some others. AI governance (AIG) is ultimately a business function, and it is about being more than just compliant, even as AIG should be co-owned by business and compliance (Ettinger, 2023)[13]. AI governance and leadership means leaning into data literacy, as well as data protection, risk management, and deeper technological and technical understanding (Hirsch, 2023)[14]. Honing data literacy skills is a big part of meeting this challenge, and something we dissect and highlight regularly through my courses and recurring career fairs.

  4. Understand the limitations of human beings to upskill for the future

What is life? Is it where all your dreams come true? If we define life to AI as the destination where all our dreams come true, we begin to forget the limitations of human beings. The truth is, we grow into life with our biases, and understanding our limitations is useful for human and AI-powered teams. Does AI know our biases? Not yet, but we might train it to learn them in the future. Embedding responsible AI into all of your company’s data practices will be a process, and a skill to master for AI competitiveness.

Documenting (codifying) standard operating procedures will continue to be important for monitoring, mapping and measuring your learnings and good practices (Chang, 2023)[15]. In a world drowning in data, where only roughly 0.005% of the population is an active researcher (ESOMAR, 2023)[16], a significant limitation of human beings is probably being understaffed (or even under-skilled) for the challenges and opportunities of upcoming AI, given the massive volumes of data we generate each year and the vast labour-market shortages in data talent (Hardie, 2023)[17]. I have discussed human skills for the future in recent podcasts with QuestionPro, Infotools and ESOMAR.

  5. Understand the limitations of all technologies, including AI

Human skills specific to public services, as in education, government and healthcare, will perhaps raise the premium (or wage) on specialist skills exclusive to humans. Pain, hunger and ambition are feelings that only humans experience and can respond to, as is our ability to grow from them. If using more AI moves humans up Maslow’s hierarchy of needs, so that in an ideal world we all achieve self-actualization more quickly, will all AI remain less evolved than human beings? And how long before our emotions are deeply learned, mimicked and manipulated? We already know that scientists and technocrats are preparing for the age of superintelligence.

AI, like all technologies, has limitations (e.g., unverified data quality, limited interpretability, and a lack of critical thinking, emotion or sentience) that need to be fully considered. However, just like people, different AI models have different strengths and weaknesses depending on how they were trained (e.g., some may be better at pattern recognition, others at natural language processing, and so on). These limitations of AI will help humans specialize and excel in the different domains needed for successful collaborations.

The World Economic Forum’s survey of businesses on top skill priorities for the 2023-27 workforce ranked analytical thinking, creativity, and big data and AI as the top three skills, in that order, followed by leadership and social influence; resilience, flexibility and agility; curiosity and lifelong learning; design and user experience; motivation and self-awareness; and empathy and active listening. Among these, cognitive skills, including creativity and analytical thinking, alongside technological literacy, are the fastest growing[18]. I can’t wait to share the findings of the next wave of our Global Industry Skills Study, which sheds light on more of the technological and data skills needed for success. Further, I invite you to consider joining our upcoming spring 2024 virtual insights career fair and case competition to upskill, network, and hire or support top global data and insights talent across North America.


Arundati Dandapani, MLitt, CAIP, CIPP/C, is the Founder and CEO of, Professor of Data, Analytics and Insights at Humber College’s RAP program and the Longo School of Business, Vice-Chair of the program advisory board at Algonquin College’s marketing research and analysis program, and holds other board and leadership roles. She was also named among ESOMAR and Insight250’s Top 75 Global Data and Insights Legends at ESOMAR’s 2023 Annual Conference in Amsterdam. Find her on LinkedIn or Twitter.

Footnoted References

  1. ESOMAR (2022). Global Market Research Report. ESOMAR.
  2. OECD.AI (2023). Visualisations powered by JSI using data from LinkedIn. Accessed November 14, 2023.
  3. Dandapani, A. (2023, May 18). Fighting the Data Deluge.
  4. Chauhan, R. (2023, November 3). AI Leadership in Action (AI Governance Global, an IAPP event 2023). IAPP.
  5. Von Kirchbach, J. (2023, November 3). AI Leadership in Action (AI Governance Global, an IAPP event 2023). IAPP.
  6. Tsormpatzoudi, P. (2023, November 3). AI Leadership in Action (AI Governance Global, an IAPP event 2023). IAPP.
  7. Roose, K. (2023, November 3). Keynote: Kevin Roose, bestselling author of ‘Futureproof,’ award-winning technology columnist, The New York Times (AI Governance Global, an IAPP event 2023). IAPP.
  8. Cervara-Nevas, L. (2023, November 3). The Alignment Problem in AI (AI Governance Global, an IAPP event 2023). IAPP.
  9. Kutterer, C. (2023, November 3). The Alignment Problem in AI (AI Governance Global, an IAPP event 2023). IAPP.
  10. Zittrain, J. (2023, November 3). Keynote: Jonathan Zittrain, Harvard Law School professor; co-founder, Faculty Director, Berkman Klein Center for Internet & Society (AI Governance Global, an IAPP event 2023). IAPP.
  11. Office of the Privacy Commissioner of Canada (2023, May 25). Announcement: OPC to investigate ChatGPT jointly with provincial privacy authorities. OPC.
  12. IAPP (2023, November 3). Regulating AI (AI Governance Global, an IAPP event 2023). IAPP.
  13. Ettinger, P. (2023, November 3). AI Leadership in Action (AI Governance Global, an IAPP event 2023). IAPP.
  14. Hirsch, D. (2023, November 3). AI Leadership in Action (AI Governance Global, an IAPP event 2023). IAPP.
  15. Chang, S. (2023, November 3). Responsible AI (AI Governance Global, an IAPP event 2023). IAPP.
  16. Same as reference 1; calculation based on estimates taken from ESOMAR data.
  17. Hardie, K. (2023, September 3). Addressing the Talent Gaps in Data.
  18. World Economic Forum (2023, April 30). The Future of Jobs Report 2023.
