In the latest episode of our new online weekly debate series we looked at one of the pressing topics of 21st century business: artificial intelligence.
A first-of-its-kind in the UAE, Crossfire explores a range of contrasting views on some of the region’s key issues, tapping into some of the everyday conversations generating heated debate across the region’s offices and social platforms.
A weekly show from ABTV, the online television arm of Arabian Business, the format features two speakers with opposing views on a key topic, with the debate running for around 20 minutes. The discussion is moderated by ABTV’s Shruthi Nair, followed by an interactive Q&A session with a live in-studio audience at ITP’s Dubai HQ.
On the third episode of the series we looked at artificial intelligence and whether the algorithms behind the technology can be biased. Did you know, for example, that AI programs have been known to decide that a man deserves to apply for a higher paying job than a woman? Where do these biases come from?
There are a number of factors that can be blamed for the bias in artificial intelligence that we see today. Globally, only 12 percent of leading machine learning researchers are women. Since inventions have been historically known to favour their creators, could this be one of the reasons?
“There’s three core foundations to AI. There’s maths and the algorithms, data, and then processing power. If one of those three things is corrupted by any kind of bias, then fundamentally the output is corrupted,” Harvey Bennett, co-founder of Searchie, an AI-based recruitment platform, argued.
However, Reaktor’s principal data scientist, Pavel Nesterov, disagreed with this on a number of fronts: “Most good developers have a good education, and we should not forget about academic integrity, which is the commitment to, and demonstration of, honest and moral behaviour in an academic setting. If we don’t agree that academic integrity exists then the whole of science has failed,” he said.
The second and the most scientific reason for human bias bleeding into artificial intelligence and other systems is data sets. AI systems are only as good as the data we put into them. Bad data can contain implicit racial, gender or ideological biases. Many AI systems will continue to be trained using flawed data, making this an ongoing problem, according to experts.
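The point about flawed data is easy to see in miniature. Below is a minimal sketch, using entirely hypothetical hiring records, of how a model trained on biased historical data simply reproduces that bias in its output — the classifier and the numbers are illustrative assumptions, not drawn from any real system discussed in the debate.

```python
# Minimal sketch (hypothetical data): a trivial frequency-based classifier
# "trained" on historical hiring records that favour one group will
# faithfully reproduce that bias in its predictions.
from collections import Counter

# Hypothetical historical records: (gender, hired?) pairs encoding past bias.
history = [("man", True)] * 80 + [("man", False)] * 20 \
        + [("woman", True)] * 30 + [("woman", False)] * 70

def train(records):
    """Estimate P(hired | gender) directly from the data."""
    totals, hires = Counter(), Counter()
    for gender, hired in records:
        totals[gender] += 1
        hires[gender] += hired
    return {g: hires[g] / totals[g] for g in totals}

model = train(history)
print(model)  # {'man': 0.8, 'woman': 0.3} -- the bias in the data survives intact
```

Nothing in the algorithm is discriminatory; the skew comes entirely from the records it was given — which is exactly the "AI systems are only as good as the data" argument.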
There are a number of examples supporting this. Among the most notorious is the case of the criminal sentencing algorithm in Florida. In May 2016 the US news organisation ProPublica reported that the system was racially biased.
“If you’ve got poor people who have been sentenced to extreme penalties and punishments because of the way that the judicial system is structured, then that is going to bleed into the results that you get,” Bennett remarked.
In another case, Google’s online advertising system showed high-income jobs to men much more often than to women. There have also been multiple cases of bias in financial algorithms for predicting loan repayment.
Bennett and Nesterov did not agree on whether these cases qualify as examples of AI bias, and each put forward strong arguments to support his position.
“Kate Crawford, co-director of the AI Now Institute at New York University, used the CEO image search in Google Image Search to highlight the complexities involved. Google returns 11 percent of results with women, while the real ratio stands at 27 percent. How then, does one determine the “fair” percentage of women that the algorithm should show? Is it the percentage of women CEOs we have today? Or might the “fair” number be 50 percent, even if the real world is not there yet? Can we fix all possible search queries to give a real percentage?” Nesterov asked, starkly highlighting the complexity of the issue.
However, one thing that everyone agrees on is that the problem is with data sets and that the solution would be to fix these data sets. But how?
“Whenever we fix data, we violate one rule of machine learning – we should not change the output of a machine learning system. A business should understand that whenever it fixes bias in the system, it loses money,” Nesterov noted.
There appears to be a tug of war between fairness and accuracy, where businesses would prefer accuracy and society at large demands fairness. While Bennett thinks that it is the responsibility of data scientists to clean the data inputs, Nesterov disagrees.
“It is not my responsibility to remove bias of data because it is not correlated with the goal of a data scientist. The goal is optimisation. There needs to come a point where there is a synergy between all parties – science, government, and businesses,” he argued.
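The fairness/accuracy tension Nesterov describes can be made concrete with a toy example. The sketch below uses invented loan records in which the historical outcomes themselves encode bias: a model that uses the sensitive attribute matches those outcomes perfectly, while a model restricted to a neutral score loses measured accuracy. All names, thresholds, and figures here are hypothetical illustrations, not data from the debate.

```python
# Minimal sketch (hypothetical numbers): when historical outcomes encode
# bias, a model that uses a sensitive attribute scores higher against that
# history, and "fixing" the bias shows up as an accuracy (and revenue) loss.

# Hypothetical loan records: (group, credit_score, repaid_in_history)
records = [
    ("A", 0.9, True), ("A", 0.6, True), ("A", 0.4, True), ("A", 0.3, False),
    ("B", 0.9, True), ("B", 0.6, False), ("B", 0.4, False), ("B", 0.3, False),
]

def biased_model(group, score):
    # Uses the sensitive attribute: group A gets a lower bar than group B.
    return score >= (0.4 if group == "A" else 0.8)

def fair_model(group, score):
    # Ignores the sensitive attribute: one threshold for everyone.
    return score >= 0.5

def accuracy(model):
    return sum(model(g, s) == y for g, s, y in records) / len(records)

print(accuracy(biased_model))  # 1.0  -- matches the biased history perfectly
print(accuracy(fair_model))    # 0.75 -- removing the bias costs raw accuracy
```

The fair model is not worse at predicting true repayment ability; it is worse at predicting a history that was itself skewed — which is why the tug of war between fairness and accuracy is so hard to resolve.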
While the debate established that AI bias exists, there is no definitive solution to it yet. Even though fairness and accuracy have yet to go hand in hand, a mass-scale shift towards AI is already under way within organisations, communities, and countries.
Artificial intelligence could contribute $15 trillion to the global economy by 2030, and 25 percent of senior executives say they plan to reimagine their businesses with AI by 2021. Another 54 percent say they will use AI to transform their businesses. But will adopting AI before fixing these problems simply end up polarising society even more?
Do you want to know who ‘won’ the debate and convinced our live audience? Check out the full video online.
Do you want to be an audience member at one of our upcoming shows? Below is a selection of our upcoming debates:
We film the series every Thursday afternoon from 3pm to 4pm.
Email: email@example.com for more information on Crossfire, ABTV’s newest weekly debate show.
Catch up with all the latest ABTV videos at www.youtube.com/user/arabianbusiness