AI can be a force for good or ill in society, so everyone must shape it, not just the ‘tech guys’
Key Excerpts from Article on Website of The Guardian (One of the UK's Leading Newspapers)
Posted: August 20th, 2023
Superpower. Catastrophic. Revolutionary. Irresponsible. Efficiency-creating. Dangerous. These terms have been used to describe artificial intelligence over the past several months. The release of ChatGPT to the general public thrust AI into the limelight, and many are left wondering: what will happen when the way we do business and live our lives changes entirely? Generative AI may impress us with its ability to produce headshots, plan vacation agendas, create work presentations, and even write new code, but that does not mean it can solve every problem. Despite the technological hype, those deciding how to use AI should first ask community members: "What are your needs?" and "What are your dreams?" The answers to these questions should define the constraints developers implement, and should drive the decision about whether and how to use AI. Whose role is it to balance the design of AI tools with the decision about when to use AI systems, and the need to mitigate the harms that AI can inflict? Everyone has a role to play. Technologists and organisational leaders have clear responsibilities in the design and deployment of AI systems. Policymakers have the ability to set guidelines for the development and use of AI ... to direct it in ways that minimise harm to individuals. Funders and investors can support AI systems that centre humans and encourage timelines that allow for community input and community analysis. All these roles must work together.
Note: Another recent Guardian article, titled "Fantasy fears about AI are obscuring how we already abuse machine intelligence", questions the ethics behind our use of this new technology — more specifically, how fears of AI bury conversations about the governments and corporations that run and deploy it for political ends.