As AI plays an ever-larger role in our daily lives, informing healthcare decisions, making recommendations, resolving customer service issues, conversing with us as companion bots, making financial decisions, driving autonomous cars, and helping employees make faster, better-informed decisions, it becomes essential that ethics and morality be built into AI applications. AI applications are making decisions that affect people's privacy, health, finances, jobs, criminal justice, safety, and overall happiness. Ethical AI is no longer an afterthought: it must be built into the fabric of AI from this point forward. This article looks at how ethics and diversity are being built into AI and why doing so matters.
Tech Giants Stand Behind Ethical AI
To ensure that AI is ethical, it must be transparent and explainable. Unconscious biases must be prevented or removed, and human review processes must occur regularly. Brands that create, deploy, and use AI must establish ethical standards and adhere to them. Companies such as Google and Microsoft have already published ethical AI principles. Microsoft puts its ethical standards into practice through its Office of Responsible AI (ORA), the AI, Ethics, and Effects in Engineering and Research (Aether) Committee, and Responsible AI Strategy in Engineering (RAISE).
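One concrete form that transparency can take is interpretability: using models whose reasoning humans can inspect and review. As a minimal sketch (illustrative only, not any company's actual tooling), a linear model's learned weights can be read off directly, giving a reviewer something auditable:

```python
# A minimal illustration of explainability: an interpretable model
# whose per-feature weights a human reviewer can inspect.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

data = load_breast_cancer()
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(data.data, data.target)

# Each weight shows how a feature pushes the prediction; the five
# largest in magnitude are the most influential.
coefs = model.named_steps["logisticregression"].coef_[0]
top = sorted(zip(data.feature_names, coefs), key=lambda t: -abs(t[1]))[:5]
for name, weight in top:
    print(f"{name:25s} {weight:+.2f}")
```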
Beyond implementing ethical practices within AI applications, it's important that brands not use AI in ways that work against the greater good of the community and the world at large. Google has committed that while it will work with government entities on AI technologies for cybersecurity, training, military recruitment, veterans' healthcare, and search and rescue, it absolutely will not develop AI-based weapons or other technologies "whose principal purpose or implementation is to cause or directly facilitate injury to people."
Other industry leaders have had their own issues with unconscious bias in AI applications; Amazon, for one, is actively working to improve trust in AI and eliminate biases after problems of its own. Finally, organizations such as OpenAI and the Future of Life Institute are working with other businesses to ensure that AI applications are designed to be ethical and equitable for everyone.
Josh Feast, an MIT alumnus and the CEO and co-founder of Cogito, an AI contact center coaching system provider, said that responsible and ethical AI has to be a top priority for society, especially given the increased role the technology plays in nearly all facets of our lives. "To properly build ethics into the fabric of AI, the onus falls on AI business leaders to deeply consider how AI influences human experiences and understand where bias can seep in. Effective and impactful AI can only happen when technology and humans work in symbiosis, and trust must exist for this relationship to be harmonious," he said.
Related Article: Make Responsible AI Part of Your Company’s DNA
Ethical AI at Stanford University
There has been much discussion about AI at Stanford University over the last five years. In 2016, Stanford began a century-long project to study how AI changes every five years. The project's latest report, "Gathering Strength, Gathering Storms: The One Hundred Year Study on Artificial Intelligence (AI100) 2021 Study Panel Report," is a timely and informative look at the momentous period the past five years have been for AI. It was written by a study panel of core multidisciplinary researchers in the field, experts whose main professional activity for years has been developing artificial intelligence algorithms or studying their influence on society. The 2021 update discusses ethics and what can be done to improve AI going forward.
The report notes that the negative impact of an AI tool is easy to recognize once the application is out in the world, but by then that impact is difficult to undo. A new program at Stanford therefore requires AI researchers to evaluate their proposals for potential negative societal impact before receiving funding. The Ethics and Society Review (ESR) requires researchers seeking funding from Stanford's Institute for Human-Centered Artificial Intelligence (HAI) to examine how their AI applications or products might pose ethical and societal risks, and to develop methods to mitigate those risks. Where necessary, they must collaborate with an interdisciplinary faculty panel to make sure those concerns are addressed before funding is granted.
Once Again, Unconscious Bias Is a Problem for AI
A recently published paper in Socius, written by Kelly Joyce, PhD, professor in the College of Arts and Sciences and founding director of the Center for Science, Technology and Society at Drexel University; Susan Bell, PhD, professor in the College of Arts and Sciences; and colleagues, raises concerns about the rush to accelerate AI development without also accelerating the training and development practices required to build ethical technology. The paper proposes a research agenda for a sociology of AI.
Bell stressed the importance of a sociological understanding of data, because the uncritical use of human data in AI sociotechnical systems tends to reproduce, and perhaps even worsen, pre-existing social inequalities. Companies that develop AI systems often claim that the algorithms or platform users create racist and sexist outcomes, she said, but sociological scholarship clearly shows that human decision-making happens at every step of the development process. Both conscious and unconscious biases are programmed into the data and the code, producing AI applications that are themselves biased. Joyce wrote that the understanding sociology provides of the relationship between human data and long-standing inequalities is necessary to develop AI applications that promote equality.
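To make that mechanism concrete, here is a minimal, hypothetical sketch (not from the Socius paper) of how a model trained on historically biased labels reproduces that bias, even when the underlying qualification is distributed identically across groups:

```python
# Hypothetical illustration: bias in historical labels propagates into
# a model trained on them. All data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)        # 0 = majority, 1 = minority (synthetic)
signal = rng.normal(0.0, 1.0, n)     # true qualification, same in both groups

# Historical approvals encode past human bias: equally qualified
# minority applicants were approved less often.
p_approve = 1.0 / (1.0 + np.exp(-(signal - 1.0 * group)))
label = (rng.random(n) < p_approve).astype(int)

X = np.column_stack([signal, group])
model = LogisticRegression().fit(X, label)

# The trained model faithfully reproduces the historical disparity.
for g in (0, 1):
    rate = model.predict(X[group == g]).mean()
    print(f"group {g}: predicted approval rate {rate:.1%}")
```

Note that simply dropping the group column would not fix this: any feature correlated with group membership lets the model recover the same pattern, which is why the sociologists argue that scrutiny has to start with the data itself.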
Transparency is one of the keys to developing ethical AI applications, Feast said. “To implement ethical AI, business leaders must prioritize delivering transparency into the technology and communicating a clear benefit to all users. This extends to supplying education and upskilling opportunities,” he said.
As a recent article points out, racial bias built into AI-based risk assessment algorithms used in healthcare has been blamed for a 46% failure rate in identifying at-risk patients of color. Eliminating such biases, whether conscious or not, is vital if AI is to be trusted and accepted in society. "Another key component of responsible AI is to actively mitigate the underlying biases of the models and systems deployed," explained Feast. "At Cogito, for example, we've done extensive, published research on de-biasing approaches for examining gender bias in speech emotion recognition."
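A basic step toward catching that kind of failure is to report error rates per group rather than a single aggregate metric. This sketch, using toy stand-in numbers, computes the per-group false-negative rate, the specific failure mode the healthcare study describes:

```python
# Minimal fairness audit: false-negative rate broken out by group.
# The arrays below are toy stand-ins; a real audit would use held-out
# clinical data.
import numpy as np

def false_negative_rate(y_true, y_pred, mask):
    """Share of truly at-risk patients in a subgroup the model missed."""
    at_risk = (y_true == 1) & mask
    missed = at_risk & (y_pred == 0)
    return missed.sum() / max(at_risk.sum(), 1)

y_true = np.array([1, 1, 1, 1, 0, 1, 1, 1, 1, 0])   # 1 = actually at risk
y_pred = np.array([1, 1, 1, 0, 0, 1, 0, 0, 0, 0])   # 1 = flagged by model
group  = np.array(["a"] * 5 + ["b"] * 5)

for g in ("a", "b"):
    fnr = false_negative_rate(y_true, y_pred, group == g)
    print(f"group {g}: false-negative rate {fnr:.0%}")
```

A large gap between groups, like the one this toy data produces, is exactly the signal that should block deployment until the model is retrained or recalibrated.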
Related Article: Why Ethical AI Won’t Catch On Anytime Soon
Diversity and Inclusion in AI Development
It has been said that the IT industry is unconsciously biased against women and people of color, though the industry has made great strides and is actively working to change that. AI and machine learning (ML) development provides a significant opportunity for growth, change, and diversity, equity and inclusion (DEI).
According to Terri Hatcher, chief diversity and inclusion officer at NTT DATA Services, a global innovator of IT and business services, although DEI matters across all industries, it's particularly important in the tech sector because of the impact technology has on all of our lives. "Technology is involved in every aspect of our lives – and touches every person no matter their skin color, gender, age, etc. Specifically, as technology like AI is being incorporated into more aspects of our personal and professional lives, there are a lot of positive opportunities — as well as a lot of potential for harmful racial biases," she said.
Hatcher said that when it comes to AI, DEI needs to be a factor from the beginning, during the idea and design stage. She emphasized that diverse voices should play a role in the complete process. “That means including diverse perspectives during development and new product brainstorms. If you don’t have diverse product developers from the start, there will almost certainly be biases baked into the product design,” said Hatcher. “And this has implications far beyond just the tech industry, as AI becomes more a part of our everyday personal lives — in healthcare, in court cases, in facial recognition software, etc. In each of those scenarios, accidental biases in the programming and design could be incredibly harmful to marginalized groups of people.”
Theresa Kushner, senior director of data intelligence and automation and leader of AI solutions at NTT DATA Services, likened AI to a newborn entering the world and learning from those around it. "Our problem is that AI, just like a newborn baby, is highly dependent on the data it is fed from transactions and interactions that the model might have. If you teach a young child prejudice, the actions taken by the child are prejudicial. AI is no different. It does unto others what it is taught to do." Kushner said that when it comes to using AI to help promote DEI, the problem may be the data used for modeling. "We don't always have a good track record for acquiring data in a diverse, equitable, or inclusive fashion. AI is only as good as the environment itself, and organizations that follow the Golden Rule, embracing DEI as a requirement of ethical practices are likely to collect better data, which will lead to more success in AI."
Final Thoughts
As artificial intelligence plays a larger role in our lives, it is crucial to build a framework for ethical and transparent AI. The tech giants are working to build trust in and acceptance of AI, but business leaders must make ethics a priority in their own AI endeavors for that trust to take hold. Finally, unconscious biases must be eliminated, and diverse voices need to play a role in the discussion and development of AI to ensure that new biases are not introduced.