Unmasking Bias in Artificial Intelligence: Challenges and Solutions

The recent advancement of Generative AI has brought an accompanying boom in enterprise applications across industries, including finance, healthcare, and transportation. Its development is also expected to spur other emerging technologies, such as cybersecurity defenses, advances in quantum computing, and innovative wireless communication techniques. However, this explosion of next-generation technologies comes with its own set of challenges.

For example, the adoption of AI may enable more sophisticated cyberattacks, create memory and storage bottlenecks as computing demands grow, and raise ethical concerns about the biases exhibited by AI models. The good news is that NTT Research has proposed a way to overcome bias in deep neural networks (DNNs), a type of artificial intelligence.

This research is a significant breakthrough, given that unbiased AI models could support decisions in employment, the criminal justice system, and health care without being influenced by characteristics such as race or gender. In the future, such automated systems have the potential to reduce discrimination, thereby improving industry-wide DE&I business initiatives. Ultimately, AI models with unbiased results will improve productivity and reduce the time it takes to complete these tasks. However, a few companies have already been forced to shut down their AI-powered programs because of the technology’s biased outputs.

For example, Amazon discontinued a hiring algorithm after discovering that it showed a preference for applicants who used words like “executed” or “captured,” which were more prevalent on men’s resumes. Another blatant example of bias comes from Joy Buolamwini of MIT, named one of the most influential people in AI in 2023 by TIME, who, in collaboration with Timnit Gebru, revealed that facial analysis technologies demonstrated higher error rates when evaluating minorities, particularly minority women, potentially due to inadequate and unrepresentative training data.

Recently, DNNs have become ubiquitous in science, engineering, and business, and even in popular applications, but they sometimes rely on spurious attributes that can convey bias. According to one MIT study, scientists have in recent years developed deep neural networks capable of analyzing vast quantities of input, including sounds and images. These networks can identify common features, allowing them to classify target words or objects. Currently, these models stand at the forefront of the field as the primary models for replicating biological sensory systems.

Hidenori Tanaka and three other scientists proposed a new algorithm to overcome the limitations of naïve fine-tuning, the status quo method of reducing a DNN’s errors or “loss.” Their algorithm reduces a model’s reliance on bias-prone attributes.

They studied neural network loss landscapes through the lens of mode connectivity: the observation that minimizers of neural networks recovered by training on a data set are connected by simple paths of low loss. Specifically, they asked the following question: are minimizers that rely on different mechanisms for making their predictions connected by similarly simple low-loss paths?
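
To make that question concrete, linear mode connectivity is typically probed by evaluating the loss along the straight line between two trained parameter vectors. Below is a minimal PyTorch sketch of that probe; the helper name and setup are illustrative assumptions, not code from the paper.

```python
import torch

def loss_along_linear_path(model, theta_a, theta_b, loss_fn, data, steps=21):
    """Evaluate the loss at evenly spaced points on the straight line
    between two trained parameter sets (state dicts) theta_a and theta_b."""
    x, y = data
    losses = []
    for i in range(steps):
        alpha = i / (steps - 1)
        # Interpolate each floating-point tensor: (1 - alpha) * a + alpha * b.
        # Integer buffers (e.g., batch-norm step counters) are carried over as-is.
        interp = {
            k: (1 - alpha) * theta_a[k] + alpha * theta_b[k]
            if theta_a[k].is_floating_point() else theta_a[k]
            for k in theta_a
        }
        model.load_state_dict(interp)
        model.eval()
        with torch.no_grad():
            losses.append(loss_fn(model(x), y).item())
    return losses
```

The “barrier” between two minimizers is then how far this curve rises above the endpoint losses: a flat curve means the pair is linearly connected, while a pronounced bump means a ridge separates their valleys.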

They found that naïve fine-tuning cannot fundamentally change a model’s decision-making mechanism, because doing so requires moving to a different valley in the loss landscape. Instead, the model must be driven over the barriers that separate the “sinks” or “valleys” of low loss. The authors call this corrective algorithm Connectivity-Based Fine-Tuning (CBFT).

Prior to this development, a DNN classifying images such as those of a fish (an illustration used in this study) used both object shape and background as input parameters for prediction. Its loss-minimizing pathways would therefore operate via different mechanisms: one relying on the legitimate attribute of shape, the other on the spurious attribute of background color. As such, these modes would lack linear connectivity, i.e., a simple low-loss path between them.
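
The shape-versus-background setup is easy to mimic with a toy dataset. The snippet below is a hedged stand-in, not the study’s actual data: the label is defined entirely by one attribute (“shape”), while a second attribute (“background”) merely correlates with it during training.

```python
import numpy as np

def make_biased_dataset(n=1000, correlation=0.95, seed=0):
    """Toy data in the spirit of the fish example: the label is defined by
    'shape', but 'background' agrees with the label `correlation` of the time."""
    rng = np.random.default_rng(seed)
    shape = rng.integers(0, 2, size=n)               # legitimate attribute
    agree = rng.random(n) < correlation              # does background match the label?
    background = np.where(agree, shape, 1 - shape)   # spurious attribute
    X = np.stack([shape, background], axis=1).astype(np.float32)
    y = shape                                        # the label is shape, by construction
    return X, y
```

A model trained on such data can reach low loss through either attribute, but only the shape-based mechanism keeps working once the correlation is broken at test time.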

The research team took a mechanistic lens on mode connectivity by considering two sets of loss-minimizing parameters, one using object backgrounds and the other using object shapes as the input attribute for prediction. They then asked: are such mechanistically different minimizers connected by low-loss paths in the landscape? Does the difference in their mechanisms affect the simplicity of the connecting paths? And can this connectivity be exploited to switch between minimizers that use the desired mechanisms?

In other words, depending on what they picked up during training on a particular data set, deep neural networks can behave very differently when tested on another data set. The team’s proposal boiled down to the concept of shared similarities. It builds on the earlier idea of mode connectivity, but with a twist – it considers how similar the underlying mechanisms are. Their research led to the following striking discoveries:

  • minimizers that rely on different mechanisms can be connected, but only along rather complex, non-linear paths
  • whether two minimizers are linearly connected is closely tied to how similar their models are mechanistically
  • naïve fine-tuning alone may not be enough to remove unwanted attributes picked up during earlier training
  • finding regions that are linearly disconnected in the landscape makes it possible to change a model’s inner workings effectively (see the sketch after this list).
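
The last point suggests a practical test, sketched below. To be clear, this is not the authors’ CBFT algorithm, whose details are in their paper; it is an assumed, conceptual illustration: fine-tune on counterfactual data where the spurious correlation is broken, then check for a loss barrier on the straight line back to the old weights. It reuses the hypothetical loss_along_linear_path helper from the earlier sketch.

```python
import copy
import torch

def mechanism_changed(model, finetune_data, eval_data, loss_fn,
                      lr=1e-3, epochs=5, threshold=0.5):
    """Fine-tune on data where the spurious attribute is decorrelated from the
    label, then test whether the new minimizer sits across a loss barrier from
    the old one (using the loss_along_linear_path sketch shown earlier)."""
    theta_old = copy.deepcopy(model.state_dict())
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    x, y = finetune_data
    model.train()
    for _ in range(epochs):
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()
    theta_new = copy.deepcopy(model.state_dict())
    losses = loss_along_linear_path(model, theta_old, theta_new, loss_fn, eval_data)
    barrier = max(losses) - max(losses[0], losses[-1])
    return barrier > threshold  # a raised barrier implies a different valley
```

Naïve fine-tuning that merely patches the old solution would leave this path flat; a genuine change of mechanism shows up as a barrier.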

While this research is a major step toward realizing the full potential of AI, the ethical concerns surrounding AI remain an uphill battle. Technologists and researchers are working to combat other ethical weaknesses of artificial intelligence and large language models, such as privacy, autonomy, and accountability.

AI can be used to collect and process large amounts of personal data. Unauthorized or unethical use of this data can compromise individuals’ privacy, leading to concerns about surveillance, data breaches, and identity theft. Artificial intelligence can also pose a threat when it comes to liability for its autonomous applications, such as self-driving cars. Establishing legal frameworks and ethical standards for responsibility and accountability will be essential in the coming years.

In conclusion, the rapid growth of generative AI technology holds promise for various industries, from finance and healthcare to transportation. Despite these promising developments, ethical concerns about AI remain substantial. As we navigate this transformative era of AI, it is vital that technologists, researchers, and policymakers work together to establish legal frameworks and ethical standards that will ensure the responsible and beneficial use of AI technology for years to come. Scientists at NTT Research and the University of Michigan are a step ahead of the game with their proposal for an algorithm that could mitigate bias in AI.