Brian Patrick Green is the director of Technology Ethics at the Markkula Center for Applied Ethics. This article is an update of an earlier article. Views are his own.
Artificial intelligence and machine learning technologies are rapidly transforming society and will continue to do so in the coming decades. This social transformation will have deep ethical impact, with these powerful new technologies both improving and disrupting human lives. AI, as the externalization of human intelligence, offers us in amplified form everything that humanity already is, both good and evil. Much is at stake. At this crossroads in history we should think very carefully about how to make this transition, or we risk empowering the grimmer side of our nature, rather than the brighter.
Why is AI ethics becoming a problem now? Machine learning (ML) through neural networks is advancing rapidly for three reasons: 1) a huge increase in the size of data sets; 2) a huge increase in computing power; and 3) huge improvements in ML algorithms and more human talent to write them. All three of these trends concentrate power, and “with great power comes great responsibility.”
As an institution, the Markkula Center for Applied Ethics has been thinking deeply about the ethics of AI for several years. This article began as presentations delivered at academic conferences and has since expanded to an academic paper (links below) and most recently to a presentation of “Artificial Intelligence and Ethics: Sixteen Issues,” which I have given in the U.S. and internationally. In that spirit, I offer this current list:
1. Technical Safety
The first question for any technology is whether it works as intended. Will AI systems work as they are promised or will they fail? If and when they fail, what will be the results of those failures? And if we are dependent upon them, will we be able to survive without them?
For example, several people have died in semi-autonomous car crashes because the vehicles encountered situations in which they failed to make safe decisions. While writing very detailed contracts that limit liability might legally reduce a manufacturer’s responsibility, from a moral perspective not only does responsibility remain with the company, but the contract itself can be seen as an unethical scheme to avoid legitimate responsibility.
The question of technical safety and failure is separate from the question of how a properly-functioning technology might be used for good or for evil (questions 3 and 4, below). This question is merely one of function, yet it is the foundation upon which all the rest of the analysis must build.
2. Transparency and Privacy
Once we have determined that the technology functions adequately, can we actually understand how it works and properly gather data on its functioning? Ethical analysis always depends on getting the facts first—only then can evaluation begin.
It turns out that with some machine learning techniques such as deep learning in neural networks it can be difficult or impossible to really understand why the machine is making the choices that it makes. In other cases, it might be that the machine can explain something, but the explanation is too complex for humans to understand.
For example, in 2014 a computer proved a mathematical theorem using a proof that was, at the time at least, longer than the entire Wikipedia encyclopedia. Explanations of this sort might be true explanations, but humans will never know for sure.
As an additional point, in general, the more powerful someone or something is, the more transparent it ought to be, while the weaker someone is, the more right to privacy he or she should have. Therefore the idea that powerful AIs might be intrinsically opaque is disconcerting.
3. Beneficial Use & Capacity for Good
The main purpose of AI is, like every other technology, to help people lead longer, more flourishing, more fulfilling lives. This is good, and therefore insofar as AI helps people in these ways, we can be glad and appreciate the benefits it gives to us.
Additional intelligence will likely provide improvements in nearly every field of human endeavor, including, for example, archaeology, biomedical research, communication, data analytics, education, energy efficiency, environmental protection, farming, finance, legal services, medical diagnostics, resource management, space exploration, transportation, waste management, and so on.
As just one concrete example of a benefit from AI, some farm equipment now has computer systems capable of visually identifying weeds and spraying them with tiny targeted doses of herbicide. This not only protects the environment by reducing the use of chemicals on crops, but it also protects human health by reducing exposure to these chemicals.
4. Malicious Use & Capacity for Evil
A perfectly well-functioning technology, such as a nuclear weapon, can cause immense evil when put to its intended use. Artificial intelligence, like human intelligence, will no doubt be used maliciously.
For example, AI-powered surveillance is already widespread, in contexts that are appropriate (e.g., airport security cameras), perhaps inappropriate (e.g., products with always-on microphones in our homes), and clearly inappropriate (e.g., products that help authoritarian regimes identify and oppress their citizens). Other nefarious examples include AI-assisted computer hacking and lethal autonomous weapons systems (LAWS), a.k.a. “killer robots.” Additional fears, of varying degrees of plausibility, include scenarios like those in the movies “2001: A Space Odyssey,” “WarGames,” and “Terminator.”
While movies and weapons technologies might seem to be extreme examples of how AI might empower evil, we should remember that competition and war are always primary drivers of technological advance, and that militaries and corporations are working on these technologies right now. History also shows that great evils are not always fully intended (e.g., stumbling into World War I and various nuclear close calls during the Cold War), and so possessing destructive power, even without intending to use it, still risks catastrophe. Because of this, forbidding, banning, and relinquishing certain types of technology would be the most prudent solution.
5. Bias in Data, Training Sets, etc.
One of the interesting things about neural networks, the current workhorses of artificial intelligence, is that they effectively merge a computer program with the data that is given to it. This has many benefits, but it also risks biasing the entire system in unexpected and potentially detrimental ways.
Algorithmic bias has already been discovered, for example, in areas ranging from criminal sentencing to photograph captioning. These biases are more than just embarrassing to the corporations that produce these defective products; they have concrete negative and harmful effects on the people who are their victims, and they reduce trust in the corporations, governments, and other institutions that might be using these biased products. Algorithmic bias is one of the major concerns in AI right now and will remain so in the future unless we endeavor to make our technological products better than we are. As one person said at the first meeting of the Partnership on AI, “We will reproduce all of our human faults in artificial form unless we strive right now to make sure that we don’t.”
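As an illustrative sketch of how this merging of program and data can go wrong, consider a toy “model” that learns approval rates from biased historical records. Everything here is invented for illustration (the groups, the decisions, and the 0.5 threshold); real systems are far more complex, but the underlying mechanism is the same: a model trained on biased decisions faithfully reproduces them.

```python
# A minimal, hypothetical illustration of how bias in training data
# propagates into a model's decisions. All data here is invented.

from collections import defaultdict

# Hypothetical historical loan decisions, biased against group "B":
# equally qualified applicants, but group B was approved far less often.
training_data = [
    ("A", 1), ("A", 1), ("A", 1), ("A", 0),   # group A: 75% approved
    ("B", 1), ("B", 0), ("B", 0), ("B", 0),   # group B: 25% approved
]

# "Training": the model simply learns each group's historical approval rate.
counts = defaultdict(lambda: [0, 0])          # group -> [approvals, total]
for group, approved in training_data:
    counts[group][0] += approved
    counts[group][1] += 1

approval_rate = {g: a / t for g, (a, t) in counts.items()}

# "Prediction": approve whenever the learned rate clears a 0.5 threshold.
def predict(group):
    return approval_rate[group] >= 0.5

print(approval_rate)               # {'A': 0.75, 'B': 0.25}
print(predict("A"), predict("B"))  # True False -- the old bias, automated
```

The point of the sketch is that nothing in the code is malicious: the bias enters entirely through the historical data, which is why auditing training data matters as much as auditing algorithms.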
6. Unemployment / Lack of Purpose & Meaning
Many people have already perceived that AI will be a threat to certain categories of jobs. Indeed, automation of industry has been a major contributing factor in job losses since the beginning of the industrial revolution. AI will simply extend this trend to more fields, including fields that have been traditionally thought of as being safer from automation, for example law, medicine, and education. It is not clear what new careers unemployed people ultimately will be able to transition into, although the more that labor has to do with caring for others, the more likely people will want to be dealing with other humans and not AIs.
Attached to the concern for employment is the concern for how humanity spends its time and what makes a life well-spent. What will millions of unemployed people do? What good purposes can they have? What can they contribute to the well-being of society? How will society prevent them from becoming disillusioned, bitter, and swept up in evil movements such as white supremacy and terrorism?
7. Growing Socio-Economic Inequality
Related to the unemployment problem is the question of how people will survive if unemployment rises to very high levels. Where will they get money to maintain themselves and their families? While prices may decrease due to lowered cost of production, those who control AI will also likely rake in much of the money that would have otherwise gone into the wages of the now-unemployed, and therefore economic inequality will increase. This will also affect international economic disparity, and therefore is likely a major threat to less-developed nations.
Some have suggested a universal basic income (UBI) to address the problem, but this will require a major restructuring of national economies. Various other solutions to this problem may be possible, but they all involve potentially major changes to human society and government. Ultimately this is a political problem, not a technical one, so this solution, like those to many of the problems described here, needs to be addressed at the political level.
8. Environmental Effects
Machine learning models require enormous amounts of energy to train, so much energy that the costs can run into the tens of millions of dollars or more. Needless to say, if this energy is coming from fossil fuels, this is a large negative impact on climate change, not to mention being harmful at other points in the hydrocarbon supply chain.
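A rough back-of-envelope calculation shows how training energy adds up. Every figure below is an assumption chosen for illustration, not a measurement of any real model, and note that the dollar figures cited above reflect total compute costs, of which electricity is only one component.

```python
# A back-of-envelope sketch of training energy cost. All figures are
# illustrative assumptions, not measurements of any real system.

gpus = 1000                 # assumed number of accelerators
power_per_gpu_kw = 0.4      # assumed average draw per accelerator, in kW
training_days = 30          # assumed wall-clock training time
price_per_kwh = 0.10        # assumed electricity price, USD per kWh
grid_kg_co2_per_kwh = 0.4   # assumed grid carbon intensity

energy_kwh = gpus * power_per_gpu_kw * training_days * 24
cost_usd = energy_kwh * price_per_kwh
emissions_tonnes = energy_kwh * grid_kg_co2_per_kwh / 1000

print(f"{energy_kwh:,.0f} kWh, ${cost_usd:,.0f}, {emissions_tonnes:,.0f} t CO2")
```

Even with these modest assumptions the run consumes hundreds of megawatt-hours; the carbon impact then depends almost entirely on how clean the supplying grid is.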
Machine learning can also make electrical distribution and use much more efficient, as well as working on solving problems in biodiversity, environmental research, resource management, etc. AI is in some very basic ways a technology focused on efficiency, and energy efficiency is one way that its capabilities can be directed.
On balance, it looks like AI could be a net positive for the environment—but only if it is actually directed towards that positive end, and not just towards consuming energy for other uses.
9. Automating Ethics
One strength of AI is that it can automate decision-making, thus lowering the burden on humans and speeding up—potentially greatly speeding up—some kinds of decision-making processes. However, this automation of decision-making presents huge problems for society, because if these automated decisions are good, society will benefit, but if they are bad, society will be harmed.
As AI agents are given more power to make decisions, they will need to have ethical standards of some sort encoded into them. There is simply no way around it. The ethical decision-making process might be as simple as following a program to fairly distribute a benefit, wherein the decision is made by humans and executed by algorithms, but it might also entail much more detailed ethical analysis, even if we humans would prefer that it did not. This is because AI will operate so much faster than humans can that, under some circumstances, humans will be left “out of the loop” of control due to human slowness. This already occurs with cyberattacks and high-frequency trading (both of which are filled with ethical questions that are typically ignored), and it will only get worse as AI expands its role in society.
Since AI can be so powerful, the ethical standards we give to it had better be good.
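The simplest case described above, a fairness rule chosen by humans and merely executed by an algorithm, can be sketched as follows. The rule (equal shares, with any remainder assigned by an agreed ordering), the recipients, and the amounts are all hypothetical; the point is that the ethical choice lives in the rule, while the code only carries it out.

```python
# A minimal sketch of a human-chosen fairness rule (equal shares,
# remainder by agreed ordering) executed by an algorithm.
# Recipients and amounts are hypothetical.

def distribute_evenly(total_units, recipients):
    """Split an indivisible benefit as evenly as possible."""
    base, remainder = divmod(total_units, len(recipients))
    # Everyone gets the base share; the first `remainder` recipients
    # (by the agreed ordering) receive one extra unit.
    return {name: base + (1 if i < remainder else 0)
            for i, name in enumerate(recipients)}

shares = distribute_evenly(10, ["Ana", "Ben", "Chi"])
print(shares)  # {'Ana': 4, 'Ben': 3, 'Chi': 3}
```

Even here an ethical decision hides in the code: who is first in the "agreed ordering" and thus receives the extra unit. Automating a decision does not remove the ethics; it only relocates them.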
10. Moral Deskilling & Debility
If we turn over our decision-making capacities to machines, we will become less experienced at making decisions. For example, this is a well-known phenomenon among airline pilots: the autopilot can do everything about flying an airplane, from take-off to landing, but pilots intentionally choose to manually control the aircraft at crucial times (e.g., take-off and landing) in order to maintain their piloting skills.
Because one of the uses of AI will be to either assist or replace humans at making certain types of decisions (e.g. spelling, driving, stock-trading, etc.), we should be aware that humans may become worse at these skills. In its most extreme form, if AI starts to make ethical and political decisions for us, we will become worse at ethics and politics. We may reduce or stunt our moral development precisely at the time when our power has become greatest and our decisions the most important.
This means that the study of ethics and ethics training are now more important than ever. We should determine ways in which AI can actually enhance our ethical learning and training. We should never allow ourselves to become deskilled and debilitated at ethics, or else, when our technology finally presents us with hard choices to make and problems we must solve (choices and problems that, perhaps, our ancestors would have been capable of handling), future humans might not be able to do it.
For more on deskilling, see this article and Shannon Vallor’s original article on the topic.
11. AI Consciousness, Personhood, and “Robot Rights”
Some thinkers have wondered whether AIs might eventually become self-conscious, attain their own volition, or otherwise deserve recognition as persons like ourselves. Legally speaking, personhood has been given to corporations and (in some countries) rivers, so consciousness is certainly not required before legal questions arise.
Morally speaking, we can anticipate that technologists will attempt to make the most human-like AIs and robots possible, and perhaps someday they will be such good imitations that we will wonder if they might be conscious and deserve rights—and we might not be able to determine this conclusively. If future humans do conclude AIs and robots might be worthy of moral status, then we ought to err on the side of caution and give it.
In the midst of this uncertainty about the status of our creations, what we will know is that we humans have moral characters and that, to follow an inexact quote of Aristotle, “we become what we repeatedly do.” So we ought not to treat AIs and robots badly, or we might be habituating ourselves towards having flawed characters, regardless of the moral status of the artificial beings we are interacting with. In other words, no matter the status of AIs and robots, for the sake of our own moral characters we ought to treat them well, or at least not abuse them.
12. AGI and Superintelligence
If or when AI reaches human levels of intelligence, doing everything that humans can do as well as the average human can, then it will be an Artificial General Intelligence—an AGI—and it will be the only other intelligence at the human level to exist on Earth.
If or when AGI exceeds human intelligence, it will become a superintelligence, an entity potentially vastly more clever and capable than we are: something humans have only ever related to in religions, myths, and stories.
Importantly here, AI technology is improving exceedingly fast. Global corporations and governments are in a race to claim the powers of AI as their own. Equally importantly, there is no reason why the improvement of AI would stop at AGI. AI is scalable and fast. Unlike a human brain, if we give AI more hardware it will do more and more, faster and faster.
The advent of AGI or superintelligence will mark the dethroning of humanity as the most intelligent thing on Earth. We have never faced (in the material world) anything smarter than ourselves before. Every other intelligent human species that Homo sapiens encountered in the history of life on Earth either genetically merged with us (as the Neanderthals did) or was driven extinct. As we encounter AGI and superintelligence, we ought to keep this in mind; though, because AI is a tool, there may yet be ways to maintain an ethical balance between human and machine.
13. Dependency on AI
Humans depend on technology. We always have, ever since we have been “human”; our technological dependency is almost what defines us as a species. What used to be just rocks, sticks, and fur clothing, however, has become much more complex and fragile. Losing electricity or cell connectivity can be a serious problem, psychologically or even medically (if there is an emergency). And there is no dependence like intelligence dependence.
Intelligence dependence is a form of dependence like that of a child on an adult. Children rely on adults to think for them much of the time, and in old age, as some people experience cognitive decline, the elderly come to rely on younger adults as well. Now imagine that the middle-aged adults who look after both children and the elderly were themselves dependent upon AI to guide them. There would be no human “adults” left—only “AI adults.” Humankind would have become a race of children to our AI caregivers.
This, of course, raises the question of what an infantilized human race would do if our AI parents ever malfunctioned. Without that AI, if dependent on it, we could become like lost children not knowing how to take care of ourselves or our technological society. This “lostness” already happens when smartphone navigation apps malfunction (or the battery just runs out), for example.
We are already well down the path to technological dependency. How can we prepare now so that we can avoid the dangers of specifically intelligence dependency on AI?
14. AI-powered Addiction
Smartphone app makers have turned addiction into a science, and AI-powered video games and apps can be addictive like drugs. AI can exploit numerous human desires and weaknesses including purpose-seeking, gambling, greed, libido, violence, and so on.
Addiction not only manipulates and controls us; it also prevents us from doing other more important things—educational, economic, and social. It enslaves us and wastes our time when we could be doing something worthwhile. With AI constantly learning more about us and working harder to keep us clicking and scrolling, what hope is there for us to escape its clutches? Or, rather, the clutches of the app makers who create these AIs to trap us—because it is not the AIs that choose to treat people this way, it is other people.
When I talk about this topic with any group of students, I discover that all of them are “addicted” to one app or another. It may not be a clinical addiction, but that is the way that the students define it, and they know they are being exploited and harmed. This is something that app makers need to stop doing: AI should not be designed to intentionally exploit vulnerabilities in human psychology.
15. Isolation and Loneliness
Society is in a crisis of loneliness. For example, recently a study found that “200,000 older people in the UK have not had a conversation with a friend or relative in more than a month.” This is a sad state of affairs because loneliness can literally kill. It is a public health nightmare, not to mention destructive of the very fabric of society: our human relationships. Technology has been implicated in so many negative social and psychological trends, including loneliness, isolation, depression, stress, and anxiety, that it is easy to forget that things could be different, and in fact were quite different only a few decades ago.
One might think that “social” media, smartphones, and AI could help, but in fact they are major causes of loneliness since people are facing screens instead of each other. What does help are strong in-person relationships, precisely the relationships that are being pushed out by addictive (often AI-powered) technology.
Loneliness can be helped by dropping devices and building quality in-person relationships. In other words: caring.
This may not be easy work and certainly at the societal level it may be very difficult to resist the trends we have already followed so far. But resist we should, because a better, more humane world is possible. Technology does not have to make the world a less personal and caring place—it could do the opposite, if we wanted it to.
16. Effects on the Human Spirit
All of the above areas of interest will have effects on how humans perceive themselves, relate to each other, and live their lives. But there is a more existential question too. If the purpose and identity of humanity has something to do with our intelligence (as several prominent Greek philosophers believed, for example), then by externalizing our intelligence and improving it beyond human intelligence, are we making ourselves second-class beings to our own creations?
This is a deeper question with artificial intelligence which cuts to the core of our humanity, into areas traditionally reserved for philosophy, spirituality, and religion. What will happen to the human spirit if or when we are bested by our own creations in everything that we do? Will human life lose meaning? Will we come to a new discovery of our identity beyond our intelligence?
Perhaps intelligence is not really as important to our identity as we might think it is, and perhaps turning over intelligence to machines will help us to realize that. If we instead find our humanity not in our brains, but in our hearts, perhaps we will come to recognize that caring, compassion, kindness, and love are ultimately what make us human and what make life worth living. Perhaps by taking away some of the tedium of life, AI can help us to fulfill this vision of a more humane world.
There are more issues in the ethics of AI; here I have just attempted to point out some major ones. Much more time could be spent on topics like AI-powered surveillance, the role of AI in promoting misinformation and disinformation, the role of AI in politics and international relations, the governance of AI, and so on.
New technologies are always created for the sake of something good—and AI offers us amazing new abilities to help people and make the world a better place. But in order to make the world a better place we need to choose to do that, in accord with ethics.
Through the concerted effort of many individuals and organizations, we can hope that AI technology will help us to make a better world.
This article builds upon the following previous works: “AI: Ethical Challenges and a Fast Approaching Future” (Oct. 2017), “Some Ethical and Theological Reflections on Artificial Intelligence” (Nov. 2017), “Artificial Intelligence and Ethics: Ten Areas of Interest” (Nov. 2017), “AI and Ethics” (Mar. 2018), “Ethical Reflections on Artificial Intelligence” (Aug. 2018), and several presentations of “Artificial Intelligence and Ethics: Sixteen Issues” (2019-20).
 Brian Patrick Green, “Artificial Intelligence and Ethics: Ten areas of interest,” Markkula Center for Applied Ethics website, Nov 21, 2017.
 Originally paraphrased in Stan Lee and Steve Ditko, “Spider-Man,” Amazing Fantasy vol. 1, #15 (August 1962), exact phrase from Uncle Ben in J. Michael Straczynski, Amazing Spider-Man vol. 2, #38 (February 2002). For more information: https://en.wikipedia.org/wiki/With_great_power_comes_great_responsibility
 Brian Patrick Green, “Artificial Intelligence and Ethics: Sixteen Issues,” various locations and dates: Los Angeles, Mexico City, San Francisco, Santa Clara University (2019-2020).
 Bob Yirka, “Computer generated math proof is too large for humans to check,” Phys.org, February 19, 2014, available at: https://phys.org/news/2014-02-math-proof-large-humans.html
 The Partnership on AI to Benefit People and Society, Inaugural Meeting, Berlin, Germany, October 23-24, 2017.
 Leila Scola, “AI and the Ethics of Energy Efficiency,” Markkula Center for Applied Ethics website, May 26, 2020, available at: https://www.scu.edu/environmental-ethics/resources/ai-and-the-ethics-of-energy-efficiency/
 Brian Patrick Green, “Artificial Intelligence, Decision-Making, and Moral Deskilling,” Markkula Center for Applied Ethics website, Mar 15, 2019, available at: https://www.scu.edu/ethics/focus-areas/technology-ethics/resources/artificial-intelligence-decision-making-and-moral-deskilling/
 Shannon Vallor, “Moral Deskilling and Upskilling in a New Machine Age: Reflections on the Ambiguous Future of Character,” Philosophy of Technology 28 (2015): 107–124, available at: https://link.springer.com/article/10.1007/s13347-014-0156-9
 Brad Sylvester, “Fact Check: Did Aristotle Say, ‘We Are What We Repeatedly Do’?” Check Your Fact website, June 26, 2019, available at: https://checkyourfact.com/2019/06/26/fact-check-aristotle-excellence-habit-repeatedly-do/
 Lee Mannion, “Britain appoints minister for loneliness amid growing isolation,” Reuters, January 17, 2018, available at: https://www.reuters.com/article/us-britain-politics-health/britain-appoints-minister-for-loneliness-amid-growing-isolation-idUSKBN1F61I6
 Julianne Holt-Lunstad, Timothy B. Smith, Mark Baker, Tyler Harris, and David Stephenson, “Loneliness and Social Isolation as Risk Factors for Mortality: A Meta-Analytic Review,” Perspectives on Psychological Science 10(2) (2015): 227–237, available at: https://journals.sagepub.com/doi/full/10.1177/1745691614568352
 Markkula Center for Applied Ethics Staff, “AI: Ethical Challenges and a Fast Approaching Future: A panel discussion on artificial intelligence,” with Maya Ackerman, Sanjiv Das, Brian Green, and Irina Raicu, Santa Clara University, California, October 24, 2017, posted to the All About Ethics Blog, Oct 31, 2017, video available at: https://www.scu.edu/ethics/all-about-ethics/ai-ethical-challenges-and-a-fast-approaching-future/
 Brian Patrick Green, “Some Ethical and Theological Reflections on Artificial Intelligence,” Pacific Coast Theological Society (PCTS) meeting, Graduate Theological Union, Berkeley, 3-4 November, 2017, available at: http://www.pcts.org/meetings/2017/PCTS2017Nov-Green-ReflectionsAI.pdf
 Brian Patrick Green, “AI and Ethics,” guest lecture in PACS003: What is an Ethical Life?, University of the Pacific, Stockton, March 21, 2018.
 Brian Patrick Green, “Ethical Reflections on Artificial Intelligence,” Scientia et Fides 6(2), 24 August 2018. Available at: http://apcz.umk.pl/czasopisma/index.php/SetF/article/view/SetF.2018.015/15729
Thank you to many people for all the helpful feedback which has helped me develop this list, including Maya Ackermann, Kirk Bresniker, Sanjiv Das, Kirk Hanson, Brian Klunk, Thane Kreiner, Angelus McNally, Irina Raicu, Leila Scola, Lili Tavlan, Shannon Vallor, the employees of several tech companies, the attendees of the PCTS Fall 2017 meeting, the attendees of the needed.education meetings, several anonymous reviewers, the professors and students of PACS003 at the University of the Pacific, the students of my ENGR 344: AI and Ethics course, as well as many more.