Artificial intelligence (AI) is becoming ubiquitous in our everyday lives.
Whether or not you're aware of it, AI is built into many of the technologies you use regularly. When Netflix recommends a show you might like, or Google suggests you book a trip online from the airport you usually fly from, artificial intelligence is involved.
In fact, 91 percent of businesses today want to invest in AI. While AI may seem highly technical, even bordering on science fiction, it is ultimately just a tool. And like any tool, it can be used for good or ill. As AI takes on increasingly sophisticated tasks, it is therefore important to ensure that an ethical framework is in place to guide its use.
Let’s dive a little deeper into the key concerns surrounding ethics in AI, some examples of ethical AI, and most importantly, how to ensure ethics are respected when using AI in a business context.
What are ethics in AI?
AI ethics is a set of moral principles that guide and inform the development and use of artificial intelligence technologies. Because AI does things that would normally require human intelligence, it needs moral guidelines just as human decision-making does. Without ethical AI regulations, the potential for this technology to be used to perpetuate misconduct is high.
Many industries use AI heavily, including finance, healthcare, travel, customer service, social media, and transportation. Due to its ever-growing utility in so many industries, AI technology has far-reaching implications for every aspect of the world and therefore needs to be regulated.
Of course, different levels of governance are required depending on the industry and context in which AI is deployed. A robot vacuum cleaner that uses AI to map a home's floor plan is unlikely to drastically change the world, with or without an ethical framework. A self-driving car that needs to recognize pedestrians, or an algorithm that determines what kind of person is most likely to be approved for a loan, can and will profoundly impact society if ethical guidelines are not implemented.
By identifying the top ethical concerns of AI, consulting examples of ethical AI, and considering best practices, you can ensure your organization is on the right track to using AI responsibly.
What are the main ethical concerns of AI?
As previously mentioned, the key ethical concerns vary widely by industry, context, and the magnitude of potential impact. But by and large, the biggest ethical issues surrounding artificial intelligence are AI bias, concerns that AI could replace human jobs, privacy concerns, and the use of AI to deceive or manipulate. Let's go through them in more detail.
Biases in AI
As AI takes on sophisticated tasks and does the heavy lifting, don’t forget that humans programmed and trained AI to perform those tasks. And people have prejudices. For example, if predominantly white male data scientists collect data on predominantly white males, the AI they design could replicate their biases.
But that is not actually the most common source of AI bias. More often, the data used to train AI models is itself biased: if the data collected comes only from the statistical majority, for example, it is inherently skewed.
A poignant example is Georgia Tech's research into object recognition for self-driving cars, which found that detection models were roughly 5% less accurate at recognizing pedestrians with darker skin than pedestrians with lighter skin. The researchers traced the disparity to the training data: the data set contained about 3.5 times as many examples of people with lighter skin, so the model learned to recognize them better. A seemingly small difference like that can have deadly consequences when self-driving cars are involved.
The good news is that the data sets AI and machine learning (ML) models are trained on can be modified, and with enough effort invested, the models can become largely unbiased. In contrast, it is not feasible to get people to make completely unbiased decisions at scale.
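To make that concrete, here is a minimal sketch of one narrow mitigation: auditing how a training set is distributed across a sensitive attribute and computing sample weights so that under-represented groups count more during training. The DataFrame, the `skin_tone` column, and the weighting step are hypothetical illustrations (using pandas and scikit-learn), not a complete answer to bias.

```python
import pandas as pd
from sklearn.utils.class_weight import compute_sample_weight

# Hypothetical training data with a sensitive attribute column.
df = pd.DataFrame({
    "skin_tone": ["light", "light", "light", "dark", "light", "dark", "light"],
    "label":     [1, 0, 1, 1, 0, 0, 1],
})

# 1. Audit: how is the data distributed across the sensitive attribute?
print(df["skin_tone"].value_counts(normalize=True))  # e.g. light ~0.71, dark ~0.29

# 2. One simple mitigation: weight samples inversely to group frequency,
#    so under-represented groups carry more weight during training.
df["sample_weight"] = compute_sample_weight(class_weight="balanced", y=df["skin_tone"])
print(df[["skin_tone", "sample_weight"]])

# These weights could then be passed to an estimator, e.g.
# model.fit(X, y, sample_weight=df["sample_weight"]).
```

Reweighting (or collecting more representative data) addresses only one kind of bias; auditing model performance for each group after training is just as important.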
AI replacing jobs
Almost every technological innovation in history has been accused of destroying jobs, and so far those fears have largely gone unrealized. As advanced as AI may seem, it will not replace humans or their jobs any time soon.
Back in the 1970s, automated teller machines (ATMs) were introduced, and people feared mass unemployment for bank employees. The reality was just the opposite: because fewer tellers were needed to operate a branch, banks could open more branches, and the total number of teller jobs grew. And they could do it for less, because ATMs handled simple, everyday tasks like processing check deposits and dispensing cash.
The same pattern is playing out with AI and its applications. When AI was first used to understand and mimic human speech, people panicked, fearing that chatbots and intelligent virtual assistants (IVAs) would replace human customer service agents. The reality is that AI-powered automation can be extremely useful, but AI is unlikely to truly replace humans.
In the same way that ATMs took care of the mundane tasks that didn't require human intervention, AI-powered chatbots and IVAs can handle simple, repetitive requests and even understand questions posed in natural language, using natural language processing (NLP) to provide helpful, contextual answers.
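As a rough illustration of how that routing can work, here is a minimal sketch, assuming scikit-learn and an entirely hypothetical set of intents, in which a simple classifier answers routine questions and hands anything it is unsure about to a human agent. Production IVAs rely on far more capable language models, so treat this as a toy example of the hand-off pattern, not a real system.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical examples of routine questions, labeled by intent.
train_texts = [
    "where is my order", "track my package", "reset my password",
    "I forgot my password", "what are your opening hours", "when are you open",
]
train_intents = [
    "order_status", "order_status", "password_reset",
    "password_reset", "opening_hours", "opening_hours",
]

# TF-IDF features plus logistic regression: a deliberately simple classifier.
bot = make_pipeline(TfidfVectorizer(), LogisticRegression())
bot.fit(train_texts, train_intents)

def route(question: str, threshold: float = 0.5) -> str:
    """Answer routine questions automatically; escalate uncertain ones."""
    probs = bot.predict_proba([question])[0]
    if probs.max() < threshold:
        return "escalate_to_human_agent"
    return bot.classes_[probs.argmax()]

print(route("how do I reset my password"))    # likely -> password_reset
print(route("my invoice was charged twice"))  # likely escalated to a human
```

The key design choice is the confidence threshold: anything the bot cannot classify confidently goes straight to a person, which is exactly the division of labor described here.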
But the most complicated queries still require a human agent's intervention. AI-powered automation may be limited in some ways, but its impact can be huge: AI-powered virtual agents can reduce customer service costs by up to 30%, and chatbots can handle up to 80% of routine tasks and customer questions.
The future of AI is realistically one in which humans and AI-powered bots work together, with the bots handling the simple tasks and humans focusing on the more complex matters.
AI and privacy
Perhaps the most pressing concern about ethics in AI is privacy. Privacy is recognized as a fundamental human right in the UN's Universal Declaration of Human Rights, and various AI applications can pose a real threat to it. Technologies such as surveillance cameras, smartphones, and the internet have made it easier than ever to collect personal data. When companies aren't transparent about why and how data is collected and stored, privacy is at risk.
Facial recognition, for example, is controversial for many reasons, one being how images are captured and stored by the technology. Being monitored without explicit consent is an application of AI that many consider unethical. The European Commission has even weighed banning facial recognition technology in public spaces until adequate ethical controls can be put in place.
The challenge in creating ethical privacy regulations around AI is that people are generally willing to give up some personal information to get some level of personalization. This is a big trend in customer service and marketing for a good reason.
In fact, 80% of consumers are more likely to make a purchase when brands offer personalized experiences (source: Epsilon).
Some examples are grocery or drug stores that offer coupons based on past purchases or travel companies that offer deals based on consumers’ location.
This personal data helps AI deliver timely, personalized content that consumers want. Still, without proper data sanitization protocols, there is a risk that this data will be processed and sold to third-party companies and used for unintended purposes.
For example, the now-infamous Cambridge Analytica scandal involved a political consulting firm, working for the Trump campaign, that harvested the private data of tens of millions of Facebook users without their consent. Third-party companies like these are also more vulnerable to cyberattacks and data breaches, which means your private information could fall even further into the wrong hands.
Somewhat ironically, AI is also a great tool for data protection. Its self-learning capabilities mean that AI-powered programs can detect malware or activity patterns that often precede security breaches. By implementing AI, organizations can spot attempted data breaches or other attacks before information is stolen.
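As an illustration of that idea, here is a minimal sketch, assuming scikit-learn and entirely hypothetical login-activity features, that uses an Isolation Forest to flag unusual access patterns. It is one common anomaly-detection technique and a toy example, not the specific method any security product uses.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical features per login event:
# [failed_attempts, megabytes_downloaded, hour_of_day]
normal_activity = np.array([
    [0, 12, 9], [1, 8, 10], [0, 20, 14], [0, 15, 11],
    [1, 10, 16], [0, 18, 13], [0, 9, 15], [1, 14, 12],
])

# Train on routine behavior so the model learns what "normal" looks like.
detector = IsolationForest(contamination=0.1, random_state=42)
detector.fit(normal_activity)

# Score new events: the second one (many failed attempts, a huge
# download, 3 a.m.) should stand out as anomalous.
new_events = np.array([
    [0, 16, 10],   # looks routine
    [9, 900, 3],   # likely flagged as suspicious
])
print(detector.predict(new_events))  # 1 = normal, -1 = anomaly
```

In practice such a detector would only raise alerts for humans to review; the proactive detection described above still depends on people deciding how to respond.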
Deception and manipulation using AI
Using AI to perpetuate misinformation is another major ethical issue. Machine learning models can easily generate factually incorrect text, meaning fake news articles or fake summaries can be created in seconds and distributed through the same channels as real news articles.
This is well illustrated by how much social media influenced the spread of fake news during the 2016 US election, putting Facebook in the spotlight of the ethical AI debate. A 2017 study by NYU and Stanford researchers showed that the most popular fake news stories on Facebook were shared more often than the most popular mainstream news stories. The fact that this misinformation could spread unchecked on Facebook, potentially affecting the results of something as important as a presidential election, is extremely disturbing.
AI is also capable of creating false audio recordings, as well as synthetic images and videos in which someone in an existing image or video is replaced with someone else. Known as "deepfakes," these false likenesses can be extremely persuasive.
When AI is used to deceive intentionally in this way, it puts the onus on individuals to discern what is real, and whether through lack of skill or lack of will, people are not always able to do so.
How to use AI ethically
With all the challenges AI brings, you might be wondering how to mitigate risk when implementing AI as a solution in your organization. Fortunately, there are some best practices for using AI ethically in a business context.
Education and awareness around AI ethics
Start by educating yourself and your peers about what AI can do, its challenges, and its limitations. Rather than scaring people, or ignoring the potential for unethical uses of AI altogether, making sure everyone understands the risks and knows how to mitigate them is the first step in the right direction.
The next step is to create a set of ethical guidelines that your organization must adhere to. Finally, since ethics in AI is difficult to quantify, check in regularly to ensure goals are being met and processes are being followed.
Take a human-first approach to AI
Taking a human-first approach means controlling bias. First, make sure your data isn't biased (recall the self-driving car example above). Second, make the development process itself inclusive. In the US, software programmers are approximately 64% male and 62% white.
This means that the people who develop the algorithms that shape the way society works do not necessarily represent the diversity of that society. By taking an inclusive approach to hiring and expanding the diversity of teams working on AI technology, you can ensure that the AI you create reflects the world it was created for.
Prioritizing transparency and security in all AI use cases
When AI is involved in data collection or storage, it’s imperative to educate your users or customers about how their data is stored, what it is used for, and the benefits they derive from sharing that data. This transparency is essential to building trust with your customers. In this way, adhering to an ethical AI framework can be seen as creating positive sentiment for your business rather than restrictive regulation.
Examples of ethical AI
Although AI is a relatively new field, tech giants that have worked in it for decades, along with objective third parties that recognize the need for intervention and regulation, have created frameworks against which you can align your own organization's policies.
Frameworks that inspire ethical AI
Several impartial third parties have recognized the need to create guidelines for the ethical use of AI and ensure that its use benefits society.
The Organization for Economic Co-operation and Development (OECD) is an international organization working to build better policies for better lives. It created the OECD AI Principles, which promote the use of AI that is innovative, trustworthy, and respectful of human rights and democratic values.
The United Nations (UN) has also developed a Framework for Ethical AI, which discusses how AI is a powerful tool that can be used for good but risks being used in ways that are inconsistent with, and run counter to, UN values. It suggests that a set of guidelines, policies, or a code of ethics is needed to ensure that the UN's use of AI is consistent with its ethical values.
Businesses and ethical AI
In addition to objective third parties, the biggest leaders in the space have also developed their own guidelines to use AI ethically.
Google, for example, has developed Artificial Intelligence Principles, an ethical charter that guides the development and use of artificial intelligence in its research and products. Microsoft not only created Responsible AI Principles that it puts into practice to guide all AI innovation at the company, but also created an AI business school to help other companies develop their own responsible AI policies.
But you don’t have to be based in Silicon Valley to advocate for ethical AI. Some smaller AI companies have followed suit and are beginning to include ethics as part of their driving values.
There are also ways that for-profit businesses can be certified as ethical and sustainable, such as the B Corp certification that validates that an organization uses business as a force for good.
Several for-profit AI companies have met B Corp standards, a sign that ethical commitments in AI are a growing trend. While this type of accreditation is not exclusive to AI companies, it does signal a commitment to acting ethically, and more tech companies can and should seek certification.
AI for Good
When discussing ethics in AI, the focus tends to fall on the possible negative use cases and impacts, but AI is also doing a lot of good. It's important to remember that AI technology is not just a potential problem; it's also a solution to many of the world's biggest problems.
There is AI to predict the effects of climate change and suggest actions to address it; robotic surgeons can perform or assist in operations that require more precision than a human can handle.
AI-assisted farming technology is increasing crop yields while reducing waste. There are even non-profit organizations like AI for Good dedicated solely to making AI a force for global impact. And as mundane as it may seem, AI also makes simple, everyday tasks like navigating traffic or asking Siri about the weather easier.
AI gets better with the right ethics
Artificial intelligence has become a powerful tool woven into your everyday life. Many of the services and devices you use rely on AI to make your life easier or more efficient. And while it is, of course, possible to use AI maliciously, the vast majority of companies have ethical principles in place to mitigate the negative effects where possible.
As long as best practices are followed, AI has the potential to improve virtually every industry, from healthcare to education and beyond. It’s up to the people creating these AI models to ensure they keep ethics in mind and constantly question how what they create can benefit society as a whole.
When you think of AI as a way to scale human intelligence rather than replace it, it doesn’t seem so complex or scary. And with the right ethical framework, it’s easy to see how it will change the world for the better.
Integrate artificial intelligence into your everyday functions and automate tasks with artificial intelligence software.