It’s Still Your Call: Ethics in the Age of AI
AI is rapidly transforming how we work, learn, and communicate. But as it evolves, we face new questions: What decisions should we let AI make? Who is responsible when it gets things wrong? And how do we ensure that the technology reflects – not replaces – our values and ideas?
In this post, we’ll explore the ethical dimensions of artificial intelligence (AI) – and ideally, leave with more questions than answers.
What is AI?
The underlying technology that powers modern AI is called machine learning (ML). ML finds correlations in large sets of data. By learning the features shared across millions of pictures of birds, a program can be trained to accurately detect whether a new picture contains a bird. This program is called a model, and creating (or ‘training’) it takes time and computing power.
In Analytical AI, ML models are often used to quickly correlate or find patterns in data (called data mining), or predict future outcomes based on historical data (called predictive analytics). If ML models are instead designed to create new content (e.g., pictures of birds), then it’s called Generative AI.
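To make the “train, then predict” idea above concrete, here is a deliberately tiny sketch. It uses a nearest-centroid classifier on made-up two-number feature vectors standing in for image features; real bird detectors learn far richer features from millions of labeled photos, but the shape of the process is the same: correlate labeled examples into a model, then use the model on new data.

```python
# Toy illustration of "training" a model: a nearest-centroid classifier.
# The feature vectors are invented stand-ins for real image features.

def train(examples):
    """Correlate labeled examples by averaging each label's features into a centroid."""
    sums, counts = {}, {}
    for features, label in examples:
        if label not in sums:
            sums[label] = [0.0] * len(features)
            counts[label] = 0
        sums[label] = [s + f for s, f in zip(sums[label], features)]
        counts[label] += 1
    # The "model" is just one averaged feature vector (centroid) per label.
    return {label: [s / counts[label] for s in sums[label]] for label in sums}

def predict(model, features):
    """Classify a new example by whichever centroid it is closest to."""
    def sq_dist(centroid):
        return sum((f - c) ** 2 for f, c in zip(features, centroid))
    return min(model, key=lambda label: sq_dist(model[label]))

# A tiny labeled "dataset" of (features, label) pairs.
data = [
    ([0.9, 0.8], "bird"), ([0.8, 0.9], "bird"),
    ([0.1, 0.2], "not-bird"), ([0.2, 0.1], "not-bird"),
]
model = train(data)
print(predict(model, [0.85, 0.75]))  # → bird
```

In this miniature form, “Analytical AI” is the `predict` call (finding the pattern a new input matches), while a Generative AI model would instead be built to produce new feature vectors of its own.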
What are ethics?
Ethics are moral principles that influence human behavior. They tell us what we should do, not merely what laws and regulations require us to do.
Ethics related to AI technologies and their development
AI is not intelligent in the way that humans are. Instead, AI merely behaves intelligently – it can perform a specific task as well as or better than a human could. As a result, we say that AI has agency (i.e., the ability to act and make decisions independently) but not intelligence. This is why AI still depends on human intelligence to direct it to perform tasks, as well as to interpret and evaluate the results. Humans are necessary for AI – and are also its biggest customers.
AI technologies also differ in how much agency they have:
- Smart AI technologies have limited agency and perform a small number of specialized tasks (e.g., a smart thermostat).
- Artificial General Intelligence (AGI) technologies use models trained on big data and perform a wide range of tasks, including cognitive reasoning.
In our Internet-connected digital world, there are no territorial boundaries, and immense amounts of data can be collected. Because AI systems are trained on massive amounts of data – often collected without users’ full awareness – concerns about privacy, bias, and transparency have become common in the past decade. In short, ethical AI development today means:
- Using data responsibly and fairly.
- Avoiding harm (discrimination, misinformation, erosion of trust).
- Solving the right problems (i.e., Should this problem be solved by AI?).
- Making results explainable and verifiable so that they can be trusted (commonly called Explainable AI).
Unfortunately, many companies that create ML models or provide AI solutions do not act ethically. Some selectively adopt only the ethical guidelines that support their own goals (called ethics shopping) or falsely make themselves appear more ethical than they actually are (called ethics blue washing). Others use ethics as an excuse to delay or avoid regulation (called ethics lobbying) or gradually normalize unethical practices over time (called ethics shirking). And some AI companies avoid local regulations by offloading risky AI practices onto countries with weak laws (called ethics dumping).
Ethics related to AI usage
Most of us won’t build ML models or AI solutions – we’ll use them. Today’s apps, browsers, and devices are increasingly “powered by AI” whether we like it or not. This brings up new ethical questions, not just for developers, but for users too. For example:
Should you use AI to write a personal letter or eulogy?
- Would your answer differ if you found it difficult or impossible to write the eulogy or personal letter without the help of AI?
- Would your answer differ depending on the person the eulogy or personal letter was written for?
Should you credit AI for rewriting a paragraph?
- Would your answer differ based on whether the AI was embedded in Microsoft Word, or whether you used an online tool such as ChatGPT?
- Would your answer change as this practice becomes the norm for more and more people and job roles?
Should you credit the use of AI if you used AI to generate several pages of content?
- Would your answer change if you tweaked and reworded much of the content, or if the content was generated from a detailed framework of ideas you provided to the AI?
- Would your answer change knowing that the AI was trained on public data that you could Google?
- Would your answer change knowing that the US Copyright Office has stated that AI-generated work is not copyrightable?
Should you use AI to obtain an answer for a class assignment?
- Would your answer change if you tweaked the AI-generated content? And if so, how much tweaking is required to consider it ethical?
- Would your answer change if you knew that the use of AI was against class policies?
- Would your answer change if you knew that the use of AI could not be accurately detected by your teacher?
And deeper still:
- What if the AI model was trained on biased or stolen data?
- What if the AI’s results are inaccurate, or can’t be explained?
- What if the company providing the AI has a record of unethical behavior (e.g., ethics shopping, blue washing, lobbying, shirking, or dumping)?
Remember that the answers to these questions are judgment calls, not legal rules, and that asking them of yourself is a key part of ethical AI use. Answers will differ by context, culture, and personal values… and will certainly change over time as AI becomes more commonplace in our lives and jobs.
But our ethical responsibility remains constant. Ethical AI doesn’t just happen when ML models are designed and created – it happens each and every time we choose to use an AI technology.