
Ethical codes of conduct and frameworks for data, analytics, and insights focus on ensuring the responsible collection, analysis, and application of data while safeguarding privacy, transparency, and fairness. These frameworks emphasize principles such as data privacy, informed consent, non-discrimination, and accountability. They also advocate for the ethical use of AI and machine learning, ensuring that algorithms do not perpetuate bias or inequality. Additionally, ethical guidelines encourage stakeholders to prioritize inclusivity, sustainability, and social impact, so that data-driven decisions benefit society as a whole. Key frameworks include privacy laws like the GDPR and other national, provincial, and regional laws, along with industry-specific standards that guide practitioners in maintaining integrity and trust. In Germany, the national code of conduct differs from the global ICC code in that even when respondents have opted out of anonymity, their responses still cannot be linked to any of their information or personally identifiable information (PII).
Codes of conduct are formulated around principles of fairness and guide industry participants. In my research-in-society course, I facilitate a codes of conduct exercise each semester as a group project: students team up to present and critique the codes of conduct of associations from around the world, examining how these codes boost or hinder an association’s brand in its market context and relevance. This exercise, which brings me much joy, invites students to explore the membership structures, values, viability, brand propositions, advocacy research, business interests, competition, and roles of these associations in their respective markets and beyond. Over time, we have seen a diverse mix of associations form part of these case-study comparisons: traditional research bodies; more niche bodies focused on healthcare, people analytics / HR research, or behavioural science; associations dedicated to a specific process within research (e.g., sampling or report writing); data science associations; and even marketing associations.
The exercise often sparks curiosity, puzzlement, or criticism, particularly regarding self-regulation. Students from regulated industries like dentistry, accounting, and nursing often find it difficult to grasp what a self-regulated industry looks like, especially when it comes to the consequences for bad actors. We also explore issues such as AI guardrails, where organizations struggle to implement rules around anonymization and de-identification, and to define not only unacceptable harms but also unacceptable technologies for which there is no way of knowing whether the right third-party, independent audits have been conducted on their implementation. Additionally, we address concerns about protecting youth and vulnerable populations, as well as tackling power imbalances in industries where established players may have an unfair, even monopolistic, edge.


Without naming specific associations, students often introduce surprising and intriguing examples in their projects, especially when exploring industries like pharmaceutical research, communications research, healthcare, behavioural science, people analytics, customer / user experience, and other ancillary and adjacent fields with core data literacy and research competencies. It takes a few sessions for everyone to understand that an industry association’s role is to serve the public interest, represent sector professionals, and provide a collective body of knowledge, resources, connections, and best practices. One common question I get is whether bad actors are really expelled from self-regulated industries and associations, such as those in marketing research, data, analytics, and insights; this often leads to lively discussions that move the conversation to research accuracy, both in terms of prediction and reporting.

In Germany, there is also the Council of Social Research, an independent body that audits complaints separately from the associations themselves. Professionalization takes hold in industries where a great deal of active learning is happening, something often perceived as unique to regulated industries. This is why some may wonder about the “motivations” for researchers to be or stay ethical; we had an interesting back-and-forth in our previous webinar on the view that all industries should be regulated. As the conversation deepens, students also present observations about associations with more nuanced literature on AI versus those that invest less in sharing such knowledge resources, especially when it comes to setting guardrails. Attractively presented handbooks and publicly accessible resources make associations stand out as trusted guardians of good industry practice in the eyes of top global talent.
