
Strategies for Cultivating a Culture of Responsible AI within Teams

by Sophie Lin - Technology Editor




ZDNET’s key takeaways

  • IT, engineering, data, and AI teams now lead responsible AI efforts.
  • PwC recommends a three-tier “defense” model.
  • Embed, don’t bolt on, responsible AI in everything.

“Responsible AI” is a hot and important topic these days, and the onus is on technology managers and professionals to ensure that the artificial intelligence work they do builds trust while aligning with business goals.

Fifty-six percent of the 310 executives participating in a new PwC survey say their first-line teams — IT, engineering, data, and AI — now lead their responsible AI efforts. “That shift puts responsibility closer to the teams building AI and sees that governance happens where decisions are made, refocusing responsible AI from a compliance conversation to that of quality enablement,” according to the PwC authors.

Also: Consumers more likely to pay for ‘responsible’ AI tools, Deloitte survey says

Responsible AI — associated with eliminating bias and ensuring fairness, transparency, accountability, privacy, and security — is also relevant to business viability and success, according to the PwC survey. “Responsible AI is becoming a driver of business value, boosting ROI, efficiency, and innovation while strengthening trust.”

“Responsible AI is a team sport,” the report’s authors explain. “Clear roles and tight hand-offs are now essential to scale safely and confidently as AI adoption accelerates.” To leverage the advantages of responsible AI, PwC recommends rolling out AI applications within an operating structure with three “lines of defense.”

  • First line: Builds and operates responsibly.
  • Second line: Reviews and governs.
  • Third line: Assures and audits.

The challenge in achieving responsible AI, cited by half of the survey respondents, is converting its principles “into scalable, repeatable processes,” PwC found.

About six in ten respondents (61%) to the PwC survey say responsible AI is actively integrated into core operations and decision-making. Roughly one in five (21%) report being in the training stage, focused on developing employee training, governance structures, and practical guidance. The remaining 18% say they’re still in the early stages, working to build foundational policies and frameworks.

Also: So long, SaaS: Why AI spells the end of per-seat software licenses – and what comes next

Across the industry, there is debate on how tight the reins on AI should be to ensure responsible applications. “There are definitely situations where AI can provide great value, but rarely within the risk tolerance of enterprises,” said Jake Williams, former US National Security Agency hacker and faculty member at IANS Research. “The LLMs that underpin most agents and gen AI solutions do not create consistent output, leading to unpredictable risk. Enterprises value repeatability, yet most LLM-enabled applications are, at best, close to correct most of the time.”

As a result of this uncertainty, “we’re seeing more organizations roll back their adoption of AI initiatives as they realize they can’t effectively mitigate risks, particularly those that introduce regulatory exposure,” Williams continued. “In some cases, this will result in re-scoping applications and use cases to counter that regulatory risk. In other cases, it will result in entire projects being abandoned.”

8 expert guidelines for responsible AI

Industry experts offer the following guidelines for building and managing responsible AI:

1. Build in responsible AI from start to finish: Make responsible AI part of system design and deployment, not an afterthought.

“For tech leaders and managers, making sure AI is responsible starts with how it’s built,” Rohan Sen, principal for cyber, data, and tech risk with PwC US and co-author of the survey report, told ZDNET.

“To build trust and scale AI safely, focus on embedding responsible AI into every stage of the AI development lifecycle, and involve key functions like cyber, data governance, privacy, and regulatory compliance,” said Sen. “Embed governance early and continuously.”

Also: 6 essential rules for unleashing AI on your software development process – and the No. 1 risk

2. Give AI a purpose — not just to deploy AI for AI’s sake: “Too often, leaders and their tech teams treat AI as a tool for experimentation, generating countless bytes of data simply because they can,” said Danielle An, senior software architect at Meta.

“Use technology with taste, discipline, and purpose. Use AI to sharpen human intuition — to test ideas, identify weak points, and accelerate informed decisions. Design systems that enhance human judgment, not replace it.”

3. Underscore the importance of responsible AI up front: According to Joseph Logan, chief information officer at iManage, responsible AI initiatives “should start with clear policies that define acceptable AI use and clarify what’s prohibited.”

“Start with a value statement around ethical use,” said Logan. “From here, prioritize periodic audits and consider a steering committee that spans privacy, security, legal, IT, and procurement. Ongoing transparency and open communication are paramount so users know what’s approved, what’s pending, and what’s prohibited. Additionally, investing in training can help reinforce compliance and ethical usage.”

4. Make responsible AI a key part of jobs: Responsible AI practices and oversight need to be as much of a priority as security and compliance, said Mike Blandina, chief information officer at Snowflake. “Ensure models are transparent, explainable, and free from harmful bias.”

Also key to such an effort are governance frameworks that meet the requirements of regulators, boards, and customers. “These frameworks need to span the entire AI lifecycle — from data sourcing, to model training, to deployment, and monitoring.”

Also: The best free AI courses and certificates for upskilling – and I’ve tried them all

5. Keep humans in the loop at all stages: Make it a priority to “continually discuss how to responsibly use AI to increase value for clients while ensuring that both data security and IP concerns are addressed,” said Tony Morgan, senior engineer at Priority Designs.

“Our IT team reviews and scrutinizes every AI platform we approve to make sure it meets our standards to protect us and our clients. For respecting new and existing IP, we make sure our team is educated on the latest models and methods, so they can apply them responsibly.”

6. Avoid acceleration risk: Many tech teams have “an urge to put generative AI into production before the team has a returned answer on question X or risk Y,” said Andy Zenkevich, founder & CEO at Epiic.

“A new AI capability will be so exciting that projects will charge ahead to use it in production. The result is often a spectacular demo. Then things break when real users start to rely on it. Maybe there’s the wrong kind of transparency gap. Maybe it’s not clear who’s accountable if you return something illegal. Take extra time for a risk map or check model explainability. The business loss from missing the initial deadline is nothing compared to correcting a broken rollout.”

Also: Everyone thinks AI will transform their business – but only 13% are making it happen

7. Document, document, document: Ideally, “every decision made by AI should be logged, easy to explain, auditable, and have a clear trail for humans to follow,” said McGehee. “Any effective and sustainable AI governance will include a review cycle every 30 to 90 days to properly check assumptions and make necessary adjustments.”
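For teams working toward that standard, a lightweight audit trail can be wired in at the point where predictions are served. The sketch below is a minimal illustration in Python; names such as log_decision and ai_decision_audit.jsonl are hypothetical, and this is one possible pattern rather than a prescribed tool. The goal is simply to make each decision traceable to a model version and, where relevant, a human reviewer.

```python
# Minimal sketch of an AI decision audit log (hypothetical names).
# Each prediction is written as one JSON line so it can be reviewed,
# explained, and traced back to a model version during periodic audits.
import json
import time
import uuid

AUDIT_LOG_PATH = "ai_decision_audit.jsonl"  # assumed location

def log_decision(model_name, model_version, inputs, output, reviewer=None):
    """Append one auditable record per AI-driven decision."""
    record = {
        "decision_id": str(uuid.uuid4()),
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "model": model_name,
        "model_version": model_version,
        "inputs": inputs,            # consider redacting sensitive fields
        "output": output,
        "human_reviewer": reviewer,  # populated when a human signs off
    }
    with open(AUDIT_LOG_PATH, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return record["decision_id"]

# Example: record a loan-screening decision for later review.
log_decision("loan_screener", "2024.06",
             {"income": 52000, "region": "NE"},
             {"decision": "refer_to_human", "score": 0.43})
```

A plain append-only log like this also makes the 30-to-90-day review cycle easier, since reviewers can sample decisions directly from the file.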

8. Vet your data: “How organizations source training data can have significant security, privacy, and ethical implications,” said Fredrik Nilsson, vice president, Americas, at Axis Communications.

“If an AI model consistently shows signs of bias or has been trained on copyrighted material, customers are likely to think twice before using that model. Businesses should use their own, thoroughly vetted data sets when training AI models, rather than external sources, to avoid infiltration and exfiltration of sensitive information and data. The more control you have over the data your models are using, the easier it is to alleviate ethical concerns.”




Defining Responsible AI & Its Importance

Responsible AI isn’t just a buzzword; it’s a critical framework for developing and deploying artificial intelligence ethically and sustainably. It encompasses principles like fairness, accountability, openness, and safety. A strong AI ethics foundation isn’t simply about avoiding negative consequences; it’s about building trust with users, stakeholders, and the public. Ignoring responsible AI practices can lead to biased algorithms, privacy violations, and reputational damage. This is increasingly important as AI adoption accelerates across industries.

Establishing Clear AI Governance & Policies

Before diving into implementation, a robust governance structure is essential. This involves:

* AI Ethics Committee: Form a cross-functional team (legal, engineering, product, HR) dedicated to overseeing AI development and ensuring adherence to ethical guidelines.

* AI Policy Documentation: Create clear, concise policies outlining acceptable use of AI, data privacy standards, and bias mitigation strategies. These policies should be readily accessible to all team members.

* Risk Assessment Framework: Implement a process for identifying and evaluating potential risks associated with AI projects before they begin. Consider factors like data sensitivity, potential for bias, and impact on vulnerable populations (a scoring sketch follows this list).

* Regular Audits: Conduct periodic audits of AI systems to assess their performance, identify biases, and ensure compliance with established policies. AI auditing is becoming a standard practice.
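As one illustration of what such a pre-project risk check might look like in practice, the minimal Python sketch below scores a proposal against the three factors named above. The factor names, the 1-to-5 scale, and the escalation thresholds are assumptions made for the example, not a formal methodology.

```python
# Minimal sketch of a pre-project AI risk screen (illustrative only).
RISK_FACTORS = ("data_sensitivity", "bias_potential", "impact_on_vulnerable_groups")

def screen_project(scores):
    """scores: dict mapping each factor to 1 (low) .. 5 (high)."""
    missing = [f for f in RISK_FACTORS if f not in scores]
    if missing:
        raise ValueError(f"score every factor before the project starts: {missing}")
    total = sum(scores[f] for f in RISK_FACTORS)
    if total >= 12 or max(scores.values()) == 5:
        return "escalate to ethics committee"
    if total >= 8:
        return "second-line review required"
    return "proceed with standard monitoring"

# Example: any single maximum-severity factor triggers escalation.
print(screen_project({"data_sensitivity": 4,
                      "bias_potential": 3,
                      "impact_on_vulnerable_groups": 5}))
```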

Fostering AI Literacy Across the Organization

A key barrier to responsible AI is a lack of understanding. Teams need to be equipped with the knowledge to identify and address ethical concerns.

* Training Programs: Offer regular training sessions on AI ethics, bias detection, and data privacy. Tailor training to different roles within the organization.

* Workshops & Simulations: Interactive workshops can help teams apply ethical principles to real-world scenarios. Simulations can demonstrate the potential consequences of biased algorithms.

* Resource Hub: Create a centralized repository of resources on responsible AI, including articles, case studies, and best practices.

* Encourage Continuous Learning: The field of AI is rapidly evolving. Promote a culture of continuous learning and encourage team members to stay up to date on the latest developments in AI ethics and AI safety.

Prioritizing Data Quality & Bias Mitigation

AI models are only as good as the data they are trained on. Biased data leads to biased outcomes.

* Data Diversity: Ensure your training data is representative of the population your AI system will impact. Actively seek out diverse data sources.

* Data Auditing: Regularly audit your data for biases. Tools and techniques are available to help identify and quantify bias in datasets.

* Bias Mitigation Techniques: Implement techniques to mitigate bias during data preprocessing, model training, and post-processing. Examples include re-weighting data, adversarial debiasing, and fairness-aware algorithms (see the sketch after this list).

* Data Privacy: Adhere to strict data privacy regulations (e.g., GDPR, CCPA) and implement robust data security measures. Data governance is paramount.
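To make the auditing and re-weighting ideas above concrete, here is a minimal Python sketch using pandas. The column names (group, label), the two-group setup, and the 0.8 "four-fifths" flag threshold mentioned in the comments are illustrative assumptions; real audits typically rely on dedicated fairness tooling and more than one metric.

```python
# Minimal sketch of a dataset bias check plus a simple re-weighting step.
import pandas as pd

def disparate_impact(df, group_col="group", label_col="label",
                     privileged="A", unprivileged="B"):
    """Ratio of positive-outcome rates between two groups (1.0 = parity)."""
    rate_priv = df[df[group_col] == privileged][label_col].mean()
    rate_unpriv = df[df[group_col] == unprivileged][label_col].mean()
    return rate_unpriv / rate_priv

def reweighting_weights(df, group_col="group", label_col="label"):
    """Weight each (group, label) cell so groups contribute proportionally."""
    weights = pd.Series(1.0, index=df.index)
    for (g, y), idx in df.groupby([group_col, label_col]).groups.items():
        expected = (df[group_col] == g).mean() * (df[label_col] == y).mean()
        observed = len(idx) / len(df)
        weights.loc[idx] = expected / observed
    return weights

df = pd.DataFrame({"group": ["A", "A", "B", "B", "B", "A"],
                   "label": [1, 1, 0, 1, 0, 0]})
print(disparate_impact(df))          # values well below ~0.8 warrant review
df["sample_weight"] = reweighting_weights(df)   # pass to a fairness-aware trainer
```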

Promoting Transparency & Explainability (XAI)

“Black box” AI systems are challenging to trust. Transparency and explainability are crucial for building confidence and accountability.

* Explainable AI (XAI) Techniques: Utilize XAI techniques to understand how your AI models are making decisions. This can involve feature importance analysis, rule extraction, and counterfactual explanations (a short sketch follows this list).

* Model Documentation: Maintain detailed documentation of your AI models, including their purpose, training data, algorithms, and limitations.

* User-Friendly Explanations: Provide users with clear, concise explanations of AI-driven decisions. Avoid technical jargon.

* Feedback Mechanisms: Implement mechanisms for users to provide feedback on AI-driven decisions and report potential biases or errors.
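As a concrete example of the feature-importance analysis mentioned above, the short Python sketch below uses scikit-learn's permutation importance on a bundled dataset. The dataset and model are placeholders chosen to keep the example self-contained; the same pattern applies to a production model, and tools such as SHAP or counterfactual explainers can be substituted.

```python
# Minimal sketch of one XAI technique: permutation feature importance.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Shuffle each feature and measure how much held-out accuracy drops;
# large drops indicate the features the model leans on most.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
ranked = sorted(zip(X.columns, result.importances_mean),
                key=lambda t: t[1], reverse=True)
for name, score in ranked[:5]:
    print(f"{name}: {score:.3f}")   # candidates for plain-language explanations
```

Ranked importances like these are a starting point for the user-facing explanations and model documentation described above, not a complete account of model behavior.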

Building Accountability into the AI Lifecycle

Accountability isn’t just about assigning blame; it’s about establishing clear lines of responsibility throughout the entire AI lifecycle.

* Defined Roles & Responsibilities: Clearly define roles and responsibilities for each stage of the AI lifecycle, from data collection to model deployment and monitoring.

* Impact Assessments: Conduct thorough impact assessments before deploying AI systems, considering potential social, economic, and ethical consequences.

* Monitoring & Evaluation: Continuously monitor the performance of AI systems and evaluate their impact on stakeholders (a drift-monitoring sketch follows this list).

* Incident Response Plan: Develop a plan for responding to incidents involving AI systems, including data breaches, biased outcomes, and unintended consequences.
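One common way to operationalize the monitoring item above is a drift check that compares live inputs or scores against the training baseline. The minimal Python sketch below computes a population stability index (PSI); the synthetic data and the 0.2 alert threshold are illustrative assumptions rather than a universal standard.

```python
# Minimal sketch of ongoing model monitoring via a drift metric (PSI).
import numpy as np

def psi(baseline, current, bins=10):
    """Population stability index; higher means more drift from baseline."""
    # Interior cut points taken from the baseline's quantiles.
    cuts = np.quantile(baseline, np.linspace(0, 1, bins + 1))[1:-1]
    base_pct = np.bincount(np.searchsorted(cuts, baseline), minlength=bins) / len(baseline)
    curr_pct = np.bincount(np.searchsorted(cuts, current), minlength=bins) / len(current)
    base_pct = np.clip(base_pct, 1e-6, None)   # avoid log(0) and division by zero
    curr_pct = np.clip(curr_pct, 1e-6, None)
    return float(np.sum((curr_pct - base_pct) * np.log(curr_pct / base_pct)))

rng = np.random.default_rng(0)
training_scores = rng.normal(0.0, 1.0, 10_000)   # baseline captured at deployment
live_scores = rng.normal(0.4, 1.2, 2_000)        # scores observed this week
drift = psi(training_scores, live_scores)
if drift > 0.2:   # common rule-of-thumb threshold, tune per use case
    print(f"PSI={drift:.2f}: investigate drift and revisit the impact assessment")
```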

Real-World Example: ProPublica’s COMPAS Investigation

ProPublica’s 2016 investigation into the COMPAS recidivism prediction algorithm highlighted the dangers of biased AI. The algorithm was found to be more likely to falsely flag Black defendants as high-risk than white defendants. This case underscored the importance of data diversity, bias mitigation, and transparency in AI systems used in criminal justice. It also spurred notable discussion about algorithmic fairness and the need for AI accountability.

Benefits of a Responsible AI Culture

Cultivating a culture of responsible AI yields significant benefits:

* Enhanced Trust: Builds trust with users, stakeholders, and the public.

* Reduced Risk: Mitigates legal, reputational, and ethical risks.

* Improved Innovation: Fosters innovation by encouraging ethical considerations throughout the AI development process.

* Increased Adoption: Facilitates wider adoption of AI by addressing concerns about fairness, transparency, and accountability.
