October 11, 2019

The Ethics of AI in 2019: The Good, The Bad, and the Ugly Sides of Rapid AI Development

AI has revolutionised various fields. Many industries are rapidly adopting AI to support their workforce in everyday operations, and it is predicted that around 1.5 million jobs in England are at high risk of being replaced by the technology. While in some cases AI has benefited people and relieved humans of repetitive, mundane tasks, its rapid advancement has also produced ethically questionable ‘solutions’ that are worth discussing. Below, we’ll be looking at various applications of AI in workforce management and beyond, covering the benefits as well as asking whether AI in the workforce is ethically irresponsible, with examples from the UK and around the world.



The Good


AI and employee coaching


In today’s fast-paced workplaces, managers can’t be in multiple places at once. Employees need mentorship and coaching to accomplish their tasks, but it is impossible for managers to always be present. The introduction of AI coaching tools has therefore proved a big help for managers and their employees. AI coaching tools begin by observing how different employees work on specific tasks. Much as AI chatbots adapt to the user experience, these AI coaches adjust to differences in how each employee tackles their work.


A specific example of this is the recently launched Cogito, a workplace coaching tool designed to help employees complete their tasks more efficiently. It combines AI with behavioural science to help employees offer customers better telephone support. Professionals who answer customer queries need software that can guide them through calls that can go in multiple directions, and the AI coaching tool provides real-time tips for those on the front line of customer service.


AI as a revolutionary tool for workforce management


AI also helps in managing employees who are deployed in the field but still require monitoring and assistance. ITProPortal reports that field service management (FSM) software is being used by companies to make better use of an employee’s time. The site points to UK vehicle glass repair company Belron, which used automatic schedule optimisation to increase the number of same-day repairs by 63% and cut technicians’ travelling time by 20%. This provides a better service to customers and keeps business costs down.


This technology is being implemented across industries, with British fleet companies also using it to manage their drivers more closely. A feature on the benefits of commercial GPS technology by Verizon Connect shows how it can support those on the road by providing increased efficiency and cost savings, improved driver safety and real-time driver insights. This allows fleet operators to know where each driver is and see how their efficiency could be improved. Employee management software tailored to particular professions allows for improved safety and planning in remote workplaces. It also supports workload balancing, the reduction of overtime, and the kind of coaching that, as mentioned, a manager can’t always give each employee, especially when they’re on the road.




The Bad


AI revealing human biases in recruitment


AI is rapidly being adopted in employee recruitment. Instead of HR staff having to go through hundreds of thousands of resumes, software can scan them while looking for particular qualities to find the best match. But while AI can process data at a rate far beyond that of humans, it can’t always be trusted to be neutral and fair. AI in recruitment often reveals human biases.


Amazon is perhaps the best-known case: it was revealed that the company’s experimental hiring tool was not rating candidates in a gender-neutral way. More specifically, Amazon’s system taught itself that male candidates were preferable, reflecting male dominance in the tech industry. Gender bias wasn’t the only issue, though. The algorithm also learned to assign little weight to skills that were common across IT candidates, such as the ability to write code. Instead, the machine favoured applicants who described themselves using verbs commonly found on male engineers’ resumes, like “captured” and “executed”.


Despite the system’s admitted faults, many companies continue to follow suit. Goldman Sachs created its own resume analysis tool that goes a step further and tries to match candidates with the division where they would supposedly be the best fit. The world’s largest professional network, LinkedIn, offers employers algorithmic rankings of applicants according to their fit for job postings on the site. It’s important to note, however, that efforts are being made to mitigate the issue. AI holds up a mirror to humanity, and in some areas it amplifies inequality, racism and sexism, largely because coders haven’t considered the wider implications of the technology. Various applications have sprung up in response to this bias, such as Etiq AI, which helps diagnose and minimise discrimination and bias in other AI applications.
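One common way practitioners try to surface this kind of recruitment bias is a simple selection-rate audit: compare how often a screening tool shortlists candidates from different groups. Below is a minimal sketch using the widely cited “four-fifths” rule of thumb. All the candidate data, group labels and numbers are invented for illustration; none of it comes from the tools mentioned above.

```python
# Hypothetical illustration: a selection-rate ("four-fifths rule") audit
# of a resume-screening tool. All data below is invented.
candidates = [
    {"group": "A", "shortlisted": True},
    {"group": "A", "shortlisted": True},
    {"group": "A", "shortlisted": False},
    {"group": "B", "shortlisted": True},
    {"group": "B", "shortlisted": False},
    {"group": "B", "shortlisted": False},
]

def selection_rate(rows, group):
    """Fraction of candidates in `group` that the tool shortlisted."""
    members = [r for r in rows if r["group"] == group]
    return sum(r["shortlisted"] for r in members) / len(members)

rate_a = selection_rate(candidates, "A")   # 2 of 3 shortlisted
rate_b = selection_rate(candidates, "B")   # 1 of 3 shortlisted
disparate_impact = rate_b / rate_a         # ratio of the two rates
print(f"A={rate_a:.2f}, B={rate_b:.2f}, ratio={disparate_impact:.2f}")
# A ratio below 0.8 is a conventional red flag for adverse impact.
```

An audit like this only detects unequal outcomes; it says nothing about why the tool behaves that way, which is why tools such as Etiq AI go further into diagnosis.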


Data privacy and security


With AI taking off, the demand for data is greater than ever. Much of this data comes from consumers, some of it collected without their explicit consent. Another issue brought about by AI is therefore that of data and privacy. Given the amount of information that companies using AI hold on consumers, a breach could be catastrophic. A study discussed by Forbes found that the average cost of a data breach for a large company is $3.86 million (£3.10 million) globally. Many are calling for privacy and transparency in how AI applications leverage information, but the pace of innovation seems too fast for the law to keep up. Hopefully, in the next few years, more mechanisms will be in place to ensure that the data captured and used by AI is more secure and private.




The Ugly


Beyond recruitment: prejudice in AI


As machines get smarter, they also get better at absorbing implicit human biases. Beyond recruitment and workforce management, the ugly side of AI is rearing its head. PredPol is an algorithm used by U.S. police that predicts when and where crimes will take place, with the aim of reducing human bias in policing. However, it was discovered that the software could lead police to unfairly target certain neighbourhoods with a high population of racial minorities, regardless of the crime rate.


Similarly, facial recognition is being used in law enforcement and related fields. Three of the top gender-recognition AIs worldwide could correctly identify a person’s gender 99% of the time, but only for light-skinned men. For dark-skinned women, the error rate rose to as much as 35%.
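Disparities like this are easy to miss when a system is evaluated only on a single aggregate accuracy figure. The sketch below breaks accuracy down per group; the sample counts are invented for illustration and only loosely echo the figures above.

```python
# Hypothetical illustration: an aggregate accuracy number can hide
# a large per-group disparity. The counts below are invented.
results = {
    # group: (correct predictions, total samples)
    "lighter-skinned men":  (99, 100),
    "darker-skinned women": (65, 100),
}

total_correct = sum(correct for correct, _ in results.values())
total_samples = sum(n for _, n in results.values())
overall_accuracy = total_correct / total_samples  # masks a 34-point gap

for group, (correct, n) in results.items():
    print(f"{group}: {correct / n:.0%} accurate")
print(f"overall: {overall_accuracy:.0%}")
```

Reporting accuracy per demographic group, rather than one blended number, is exactly how the disparities in commercial gender-recognition systems were uncovered.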


Knowing the ethical considerations of rapid AI development is necessary to understand the shifting tides of industries such as hiring and workforce management. However, it’s important to note that many of the biases AIs display are a reflection of the human behaviour and data they were trained on. Understanding the role of humans in the shortcomings of AI is the first step in solving the problem.


Environmental impact of tech


AI algorithms work tirelessly to find connections that can translate into more efficient processes. However, these predictive capacities come at a cost: training artificial intelligence is a highly energy-intensive process. A recent study suggests that the carbon footprint of training a single large AI model can reach 284 tonnes of carbon dioxide, about five times the lifetime emissions of an average car. With more and more real-world applications of AI, the environmental implications of the energy used to train this software will become a growing issue in the coming years.
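The comparison quoted above can be sanity-checked with quick arithmetic, taking the study’s figures as given (estimates of an “average car’s” lifetime emissions vary between studies):

```python
# Back-of-the-envelope check of the figures quoted above.
training_footprint_tonnes = 284   # CO2 for training one large model, per the study
car_multiple = 5                  # "about five times" a car's lifetime emissions
implied_car_lifetime_tonnes = training_footprint_tonnes / car_multiple
print(f"Implied lifetime emissions per car: ~{implied_car_lifetime_tonnes:.1f} tonnes CO2")
```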



The Takeaways


The rise of artificial intelligence has spurred numerous innovations that have furthered social good. AI programs have helped people find the right jobs and have supported their career development. These programs have also helped social workers identify vulnerable individuals and have given consumers a more pleasant shopping experience.


However, it cannot be discounted that these advantages have a flip side. AI can be trained into bias against a certain sex or race. It can also acquire more private information than we are willing to share. Beyond that, it has the potential to drive up global carbon emissions. Given this, we are at a crossroads with AI: we need to decide whether “the good” or “the bad” will dominate the technology. Understanding both sides of the picture can help us determine where we take AI from here, for the betterment of everyone’s future.



By: Jamie Briant written for Snap Out

Source: https://medium.com/snapout/the-ethics-of-ai-in-2019-the-good-the-bad-and-the-ugly-sides-of-rapid-ai-development-e7e7ff63bde0
