Links
This article discusses the evolution of data governance from a rigid, compliance-focused approach to a more dynamic, context-driven model. It argues that as AI systems become more autonomous, organizations need to shift from controlling data to ensuring accountability and intentionality in how data is used. The author emphasizes the importance of negotiating meaning and maintaining oversight in increasingly complex socio-technical environments.
This article discusses the importance of designing AI systems that prioritize human understanding and accountability. It emphasizes the need for transparency, clear boundaries, and systems that preserve human capability to avoid the pitfalls of automation. The author warns against the dangers of opaque AI designs and advocates for a thoughtful approach to integrating technology into complex systems.
The author critiques the reliance on AI tools like LLMs for code generation, arguing that it undermines the essential thinking and problem-solving skills of developers. They compare generated code to fast fashion—appealing but often flawed—emphasizing the importance of accountability and understanding in software development.
StrongDM's AI team has developed a system where coding agents autonomously write and test software, eliminating human involvement in code creation and review. This raises important questions about accountability and liability, as existing regulatory frameworks struggle to adapt to this new model of software development.
This article outlines Monarch's philosophy on integrating AI into software engineering while maintaining quality and accountability. It emphasizes understanding the latest developments in AI without rushing to adopt every new tool and stresses the importance of individual ownership of work.
This article discusses the concept of Write-Only Code, where production code is generated by AI and often never read by humans. It explores the implications for software development roles, accountability, and the need for new practices in managing code that cannot be reviewed line by line.
California's SB 53, the landmark Transparency in Frontier Artificial Intelligence Act, has been signed into law, requiring large frontier AI developers to publish their safety frameworks and report critical safety incidents. The legislation aims to enhance accountability in how the most capable AI models are built and deployed. It represents a significant step toward regulating the rapidly evolving AI landscape.
The article discusses the concept of an "AI quality coup," in which rapid advances in artificial intelligence threaten to displace traditional quality standards and practices across various fields. It argues for a careful balance between innovation and quality, warning that hasty adoption of AI tools could lead to unintended consequences for decision-making and accountability.