AI, Ethics, and Empathy: A Deep Dive With Kat Morgan on Arrested DevOps
In this Arrested DevOps episode, Matty Stratton and guest Kat Morgan examine AI’s multifaceted role in development, delving into ethical, practical, and human-centered concerns. The discussion covers working alongside AI, sound engineering practice, and building more humane, responsible technology.
AI, Ethics, and Empathy With Kat Morgan
Posted on Tuesday, Jun 3, 2025.
In this episode of Arrested DevOps, Matty Stratton hosts Kat Morgan for an extensive conversation exploring the ethical, practical, and technical landscape of contemporary AI. They discuss how AI — especially large language models (LLMs) — is reshaping daily development practice, productivity, accessibility, and the ethical terrain of the industry.
Key Themes and Takeaways
- Nuance Is Needed: Strong opinions about AI often overlook the complexities involved; AI’s impact is multifaceted and dynamic, requiring ongoing learning and critical thinking.
- Empowering Developers and Teams: Kat and Matty examine how LLMs assist with coding, project planning, and even executive function, especially for neurodivergent individuals.
- Code Hygiene and Best Practices: Collaborating with AI sharply exposes the importance of context organization, modular code, and clear documentation.
- Ethical and Environmental Concerns: They tackle issues like intellectual property, accessibility, environmental impact, security, privacy, and who is responsible for the consequences of AI use.
- The Importance of Empathy: Treating AI agents with respect not only guides good practice but also reinforces positive interpersonal habits in human teams.
Full Transcript Highlights
Kat: I am strongly opposed to abusing the robots. Even if they never achieve sentience.
Matty: This episode is about understanding AI and establishing good practice for DevOps teams. AI is everywhere now. What are your lived experiences, Kat?
Kat: The ethical angles are numerous: IP rights and contributors getting fair value, accessibility via LLMs for disabled people, ecological impact, and academic dynamics. We’re likely all going to be responsible — like with documentation or security, everyone has to understand boundaries of AI responsibility. Even abstainers need to question and learn.
Matty: The debate around AI is as broad as asking, “what do you think about computers?” How do we stay educated without being steamrolled?
Kat: I use LLMs heavily for code generation and research, but the landscape is changing fast. I now think I can stay longer in tech thanks to AI alleviating my carpal tunnel issues. The hardware industry will have to accelerate to support models ethically and efficiently. Running LLMs locally is now feasible and critical for accessibility.
Matty: Accessibility is one example; I use ChatGPT for alt text and social media, speeding up tasks I could do manually. With tools like Cursor, I can prototype without needing full context all the time. My experience shows engineers aren’t going away — AI helps best when you already understand the problem.
Kat: When using LLMs for complex projects, much of the work is developing quality context — research, plans, hygiene, etc. Sometimes, with good context, LLMs can complete entire tasks much faster than expected, freeing more time for value-adding work (like reviewing, optimizing, reconsidering user experience).
Matty: AI assists hobby projects, like updating my podcast’s Hugo theme, by removing busywork and maintaining context between sessions. Agents remember the project state even after long gaps.
Kat: LLMs also help with executive function: tracking progress, planning, and estimating work helps manage burnout and context-switching. AI serves as an executive decision regulator, keeping big-picture plans and daily focus aligned.
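The pattern Kat and Matty describe, an agent that persists project state between sessions so plans and daily focus stay aligned, can be sketched as a small plan file the agent loads at the start of a session and updates as tasks finish. This is a minimal illustration; the file name, keys, and workflow are hypothetical, not tooling from the episode:

```python
import json
from pathlib import Path

STATE_FILE = Path("agent_state.json")  # hypothetical state file

def load_state():
    """Load the persisted project state, or start a fresh plan."""
    if STATE_FILE.exists():
        return json.loads(STATE_FILE.read_text())
    return {"plan": [], "done": [], "next_focus": None}

def record_progress(state, task):
    """Mark a task finished and promote the next item to the focus."""
    if task in state["plan"]:
        state["plan"].remove(task)
        state["done"].append(task)
    state["next_focus"] = state["plan"][0] if state["plan"] else None
    STATE_FILE.write_text(json.dumps(state, indent=2))  # survives long gaps
    return state

# Example session: two planned tasks, one gets completed.
state = {"plan": ["update theme", "fix RSS feed"], "done": [],
         "next_focus": "update theme"}
state = record_progress(state, "update theme")
print(state["next_focus"])  # → fix RSS feed
```

Because the state lives on disk rather than in the conversation, a new session (human or agent) can pick up exactly where the last one stopped, which is the context-switching relief Kat points to.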
Matty: Seeing AI as a colleague or pair programmer helps mitigate burnout and maintain momentum. The risks don’t disappear — privacy, data security, and environmental responsibility are ongoing concerns. For example, using local dev containers and service accounts can help secure secrets.
Kat: I maintain strong boundaries: running agents in isolated containers, careful secret management, and good code hygiene are necessary. AI is a powerful tool but exposes risk without context, cleanliness, and intent. Practitioners have to drive responsible adoption.
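One way to realize the isolation Kat describes is a dev container that mounts only the project itself and injects a scoped service-account token instead of personal credentials. This is an illustrative sketch of a `devcontainer.json`, not a configuration from the episode; the image and variable names are placeholders:

```jsonc
{
  "name": "agent-sandbox",
  "image": "mcr.microsoft.com/devcontainers/base:ubuntu",
  // Mount nothing beyond the workspace: no host SSH keys,
  // no ~/.aws or ~/.config cloud credentials.
  "mounts": [],
  "containerEnv": {
    // Pass a short-lived service-account token (hypothetical
    // variable name) rather than a personal secret.
    "AGENT_CI_TOKEN": "${localEnv:AGENT_SERVICE_TOKEN}"
  }
}
```

The point is the boundary, not the specific tool: an agent running inside such a container can edit and test code freely while the blast radius of a bad command or a leaked prompt stays limited to the sandbox.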
Matty: Record your decisions in public issues or as part of the agent workflow — carrying context is easier for both humans and AI. Healthy skepticism stays vital, as output is sometimes wrong, repetitive, or context-insensitive unless properly guided.
Kat: AI won’t replace people. It lets us up-level — with well-documented APIs and clean modular design, it can automate the mechanical aspects and let us focus on real value. Good context design is paramount, as is being polite even to digital agents (to reinforce constructive behavior in ourselves and teams).
Matty: Missteps are common — without proper guardrails, LLMs troubleshoot things that aren’t broken or disrupt working code. It’s critical to bring empathy and caution into how we interact with these systems, and with each other.
Kat: Neural networks are modeled after human brains. Being constructive in our interactions with AI is just as important as with humans; negative reinforcement affects us all. Respect and intent are central to healthy technology cultures.
Show Notes
- Navigating burnout and finding meaning in tech
- Ethical challenges: IP, access, environment
- LLMs and neurodivergent-friendly workflows
- Pairing with AI in issues and code reviews
- Private/local agent setups for security and privacy
- Fostering empathy through healthy agent interaction
Quote:
“We actually have to respect our own presence enough to appreciate that what we put out in the world will also change ourselves.” — Kat Morgan
Topics Covered
- Nuanced discourse on AI adoption and skepticism
- Best context/executive function techniques with LLMs
- Developer hygiene: modular code, clean interfaces, release checklists
- Private/secure setups: containers, secrets management
- Empathy and respect in digital and human interactions
Hosts and Guests
- Matty Stratton: Solution Architect at Turbot, global DevOpsDays organizer
- Kat Morgan (usrbinkat): Platform Engineer at Cisco, neurodiversity advocate
Further Reading & Links
Listen and subscribe to future episodes on Spotify, iHeartRadio, Audible, and more. For feedback, visit arresteddevops.com/itunes.
This post appeared first on “Arrested DevOps”.