Is AI the Right Solution? Part 2: Examples and Ethical Risks

By Hidde de Smet

Welcome to Part 2 of our series on validating AI projects! In Part 1: The Decision Framework, we introduced a structured decision tree to help assess the viability of AI initiatives. Now, let’s explore practical applications of this framework and dive into the crucial ethical considerations that every AI project must address.

Table of Contents

  1. Applying the Framework: Generic Examples
    • Example 1: AI for Process Optimization
    • Example 2: AI for Enhanced Customer Experience
  2. Ethical Considerations and Risks
    • Bias, Fairness, and Transparency
    • Privacy and Data Governance
    • Workforce and Societal Impact
    • Environmental Impact

Applying the Framework: Generic Examples

Example 1: AI for Process Optimization (Manufacturing, Logistics, Back-office)

  • Strategic Alignment: Determine whether optimizing targeted business processes (e.g., reducing defects, streamlining logistics, automating data entry) supports core goals such as cost reduction, improved quality, or operational efficiency.
  • Pillars Evaluation:
    • Objective: Reduce process cycle time by X%, decrease error rates by Y%, save Z in operational costs.
    • Audience/Impact: Affects [number] operators/teams; potential time and material savings quantified by hours or percentages.
    • Training & Data: Needs historical process data—sensor logs, quality records, transactional data. Consider time & cost for data collection, cleansing, and labeling. Assess model training complexity (low/medium/high).
    • Operations: Estimate ongoing cost ($ per month/year) for cloud resources, monitoring, and model retraining.
    • Business Impact: Expect cost reduction or improved efficiency; possible benefits in quality or compliance.
    • Impact Quantification: Calculate potential annual savings (labor reduction, fewer errors, less waste, faster throughput).
    • Feasibility & Effort: Judge as low/medium/high based on data and technical requirements, system integration, change management.
    • ROI Assessment: If high impact and manageable effort, it may qualify as a “Quick win” or “Strategic bet”; a back-of-the-envelope sketch follows this list.
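To make the ROI call concrete, here is a minimal Python sketch of the back-of-the-envelope math this pillar implies. Every figure and threshold below is a hypothetical placeholder, not part of the framework itself.

```python
# Back-of-the-envelope ROI check for an AI process-optimization idea.
# Every number below is a hypothetical placeholder.
annual_labor_savings = 120_000   # fewer manual hours, in $/year
annual_error_savings = 45_000    # fewer defects and rework, in $/year
annual_waste_savings = 15_000    # less material waste, in $/year

one_time_build_cost = 200_000    # data prep, training, integration
annual_running_cost = 60_000     # cloud, monitoring, retraining

annual_benefit = annual_labor_savings + annual_error_savings + annual_waste_savings
annual_net = annual_benefit - annual_running_cost
payback_years = one_time_build_cost / annual_net

print(f"Annual net benefit: ${annual_net:,.0f}")
print(f"Payback period: {payback_years:.1f} years")

# Rough classification in the spirit of the framework (thresholds invented):
if annual_net > 100_000 and payback_years < 1.5:
    print("Candidate: Quick win")
elif annual_net > 100_000:
    print("Candidate: Strategic bet")
else:
    print("Re-scope the project or reduce the effort")
```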

Example 2: AI for Enhanced Customer Experience (Personalization, Support Chatbots)

  • Strategic Alignment: Assess if improvements in personalization or support align with goals to raise satisfaction, retention, or lifetime value.
  • Pillars Evaluation:
    • Objective: Aim to increase satisfaction scores (CSAT/NPS), reduce churn, or boost conversion rates.
    • Audience/Impact: Quantify how many customers are affected and what percentage may see improved engagement.
    • Training & Data: Use customer interaction data, CRM info, support transcripts, feedback; prioritize privacy and governance.
    • Operations: Estimate recurring operational cost.
    • Business Impact: Potential revenue gains (retention, upsell, new customers) or higher satisfaction and loyalty.
    • Impact Quantification: Pinpoint annual revenue growth or savings from churn reduction and increased customer value (a worked example follows this list).
    • Feasibility & Effort: Evaluate data integration, complexity of models, UI/UX needs, and ethical challenges.
    • ROI Assessment: If the results promise significant business impact proportional to the effort, this could be a “Strategic bet”—but only if ethical risks are managed.
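The same arithmetic works for customer-experience initiatives. A short sketch, again with invented numbers, showing how churn reduction and upsell might translate into annual revenue impact:

```python
# Quantifying the annual revenue impact of a customer-experience AI initiative.
# Every figure here is an invented example, not a benchmark.
customers = 50_000
avg_annual_revenue = 240.0        # $ per customer per year
expected_churn_reduction = 0.02   # churn assumed to drop by 2 percentage points
upsell_uplift = 6.0               # extra $ per customer per year from personalization

retention_revenue = customers * expected_churn_reduction * avg_annual_revenue
upsell_revenue = customers * upsell_uplift
total_annual_impact = retention_revenue + upsell_revenue

print(f"Retention revenue: ${retention_revenue:,.0f}")
print(f"Upsell revenue: ${upsell_revenue:,.0f}")
print(f"Total annual impact: ${total_annual_impact:,.0f}")
```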

Ethical Considerations and Risks

Successful AI projects require careful attention to ethical risks. Neglecting them can lead to reputational damage, legal exposure, or real harm to people. Four main areas to address:

  1. Bias, Fairness, and Transparency: whether models treat all groups equitably and whether their decisions can be scrutinized.
  2. Privacy and Data Governance: how data is collected, consented to, reused, and disposed of.
  3. Workforce and Societal Impact: how automation affects jobs, skills, and society.
  4. Environmental Impact: AI’s ecological footprint.

1. Bias, Fairness, and Transparency

Bias in AI Models

  • AI learns from data; if data is biased, AI can perpetuate or amplify those biases.
  • Automation and Bias
    • Human designers and trainers can unintentionally embed their own biases into models.
    • At scale, this can lead to unfair or discriminatory outcomes even with good intentions.

Example: If an AI is trained to generate images of “Founding Fathers of America” and most sample images are of white men, the AI may exclude other historical contributors, erasing diversity present in actual history.
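One practical countermeasure is to audit the training data before any model is trained. Below is a toy Python sketch of such a representation check; the records and the dominance threshold are illustrative assumptions, not real data.

```python
from collections import Counter

# Toy audit of group representation in a training set.
records = [
    {"label": "founding_father", "depicted_group": "white_male"},
    {"label": "founding_father", "depicted_group": "white_male"},
    {"label": "founding_father", "depicted_group": "white_male"},
    {"label": "founding_father", "depicted_group": "white_male"},
    {"label": "founding_father", "depicted_group": "black_male"},
]

counts = Counter(r["depicted_group"] for r in records)
total = sum(counts.values())
for group, n in counts.most_common():
    share = n / total
    flag = "  <-- dominates the sample" if share > 0.75 else ""
    print(f"{group}: {share:.0%}{flag}")
```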

[Image: AI-generated image of the Founding Fathers showing bias.] Example of potential bias in AI-generated imagery when not carefully managed.

[Image: A more historically accurate AI-generated image of the Founding Fathers.] Striving for diversity in datasets and design improves the inclusiveness and accuracy of AI outputs.

Transparency and Explainability

  • Understanding how AI systems make decisions builds trust and allows scrutiny for bias or error; a minimal explainability sketch follows this list.
  • Many private companies have little incentive to disclose AI logic, leading to “black box” systems.
  • This opacity can erode trust, as seen in debates over TikTok or Instagram algorithms.
  • Noteworthy exceptions exist:
    • X (formerly Twitter) open-sourced its feed algorithm.
    • Microsoft has announced plans to open-source the AI in GitHub Copilot Chat, as noted on the VS Code blog.
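Explainability does not have to wait for vendors or regulators: even simple, model-agnostic checks reveal which inputs drive a model’s outputs. A minimal scikit-learn sketch on synthetic data, illustrative rather than a complete explainability program:

```python
# Probing a "black box": shuffle one feature at a time and measure how much
# accuracy drops. A large drop means the model leans heavily on that feature.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Synthetic data for illustration only.
X, y = make_classification(n_samples=500, n_features=5, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, importance in enumerate(result.importances_mean):
    print(f"feature_{i}: importance {importance:.3f}")
```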

Regulation: The EU AI Act

  • The EU AI Act will demand transparency and risk disclosure for “high-risk” AI systems.
  • Most applications will not be “high-risk,” but adherence to ethical guidelines and transparency is always best practice.

[Image: EU AI Act.] The EU AI Act targets high-risk AI systems with mandates for transparency and user awareness.

2. Privacy and Data Governance

  • AI requires significant data, raising privacy challenges:
    • Can we ensure effective data de-identification? (A minimal pseudonymization sketch follows this list.)
    • How is data from production reused for further training, and is user consent valid and informed?
    • Biometric data and facial recognition invite heightened scrutiny and require strong safeguards.
    • Repurposing data beyond its initial purpose without explicit consent undermines trust.
  • Consider data longevity and disposal—especially of sensitive information.
  • “Click-through” agreements may not provide genuine informed consent for AI training—this remains ethically debatable.
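A common first technical step toward de-identification is keyed pseudonymization: replacing direct identifiers with stable pseudonyms so records can still be linked for training without exposing identities. A minimal sketch follows; the key handling is purely illustrative, and pseudonymization alone is not full anonymization, since quasi-identifiers can still re-identify individuals.

```python
import hashlib
import hmac

# Replace a direct identifier with a keyed, deterministic pseudonym so records
# can still be joined for training. NOTE: this is pseudonymization, not full
# anonymization; quasi-identifiers may still allow re-identification.
SECRET_KEY = b"store-me-in-a-vault-and-rotate-me"  # hypothetical secret

def pseudonymize(value: str) -> str:
    """Keyed SHA-256 hash of a direct identifier (e.g., an email address)."""
    return hmac.new(SECRET_KEY, value.encode("utf-8"), hashlib.sha256).hexdigest()[:16]

record = {"email": "jane@example.com", "support_transcript": "..."}
record["email"] = pseudonymize(record["email"])
print(record)  # the email is now a stable pseudonym
```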

3. Workforce and Societal Impact

  • Automation vs. Augmentation: Full replacement of human workers with AI is costly and not always practical; often AI should augment rather than replace.
  • It is essential for employees to develop AI-related skills, much as digital literacy became essential in earlier decades.
  • Some governments (e.g., Singapore) proactively fund upskilling to balance automation benefits and job displacement.
  • Real-world labor impacts exist; Duolingo reportedly cut 10% of its contractor workforce as it leaned more on AI for content and translation.

4. Environmental Impact

  • AI systems carry meaningful environmental costs: training and serving large models consume substantial electricity, and data centers use water for cooling. These footprints should be measured and mitigated; a rough estimate sketch follows.
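Even a rough, first-order estimate makes this footprint concrete enough to discuss. The sketch below uses placeholder inputs; real accounting needs measured power draw and your provider’s regional grid-intensity figures.

```python
# First-order estimate of a training run's energy use and emissions.
# All inputs are placeholders; substitute measured values where possible.
gpu_count = 8
gpu_power_kw = 0.4        # average draw per GPU, in kW
training_hours = 72
pue = 1.2                 # data-center power usage effectiveness
grid_intensity = 0.35     # kg CO2e per kWh (varies widely by region)

energy_kwh = gpu_count * gpu_power_kw * training_hours * pue
co2e_kg = energy_kwh * grid_intensity
print(f"Energy: {energy_kwh:,.0f} kWh, roughly {co2e_kg:,.0f} kg CO2e")
```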

Conclusion

By applying a structured framework to validate AI projects, organizations can better judge strategic fit, operational feasibility, and business value. However, without carefully weighing ethical issues—bias, transparency, privacy, workforce, and environment—even technically successful initiatives can cause unintended harm.

Stay tuned for Part 3: Metrics, Piloting, and Key Takeaways, available June 9, 2025, where we’ll discuss defining success and running effective pilot projects.
