Go To Namecheap.com
Hero image of Can AI be trusted? The need for transparency
News, Tech Roundup

Can AI be trusted? The need for transparency

Artificial intelligence is suddenly everywhere we turn. While many of the conversations focus on generative AI, like ChatGPT and Bing, few people are talking about the impact of companies using AI in other ways. According to a 2022 survey, nearly 80% of US companies use AI in multiple ways, and consumers need a clear understanding of how these uses of AI may impact them. 

Algorithms are everywhere these days, making decisions often without human intervention. AI also generates content and automates various processes. Because of the rapid expansion of the use of these automated systems, brand transparency and accountability have become increasingly important.

Let’s take a look at this issue of AI use and brand transparency, and explore why it should matter both to the brands that use AI and their customers. 

Why transparency matters

When you contact a company or submit information to them, you probably want to interact with actual human beings. When you read a news article or documentation, you need to be able to trust the content and know that it’s accurate and not harmful.

Despite this, as AI technology improves and becomes more widely available, many companies use it to handle initial conversations, filter applications, and make decisions. It’s faster and more cost-effective, and AI workers don’t need lunch breaks or health insurance. 

The issue is that when humans aren’t involved, your unique situation, qualifications, or needs may not be fully appreciated. Furthermore, some AI systems may unintentionally operate with a cultural, racial, gender, or language bias that could cause automatic decisions to be unfair or inappropriate, or invent facts or misinterpret data.

When companies aren’t open about their use of automatic algorithms in these situations, it becomes nearly impossible to hold these brands accountable, address unfair practices, or get resolutions for other issues that an impersonal AI system could have caused.

The problem of bias

AI systems are only as good as the data used to train them. Because most AI platforms and providers don’t disclose what kind of data they use and where they obtained it, it becomes impossible for brands that use these systems—much less the rest of us—to know if these automatic systems are making reasonable judgment calls.

As the Mozilla Foundation stated in their recent report on AI transparency, 

“Understanding the source of data – and how it was collected and processed – is key for obtaining trustworthy output. Bias can enter data through the process in which the data was collected, or even in the process through which it was generated, in which case the data reflects biases that already exist in society.”

Already we’ve explored examples of how biased data can influence the output of generative AI systems like ChatGPT and DALL-E or Midjourney. It’s not a huge leap to imagine the kinds of problems bias could cause in other situations like healthcare or hiring decisions. 

Glitchy code and security issues

AI use in software introduces another issue that consumers should have a right to know about.

When companies turn to AI to write code, it can lead to problems for both the company and its unsuspecting users. Generative AI can create glitchy code or software with security holes

Furthermore, some tech companies are using AI output to build software products like virtual assistants to manage email and calendars, or to book flights. Some of this software (or the AI it used to build it) may copy malicious executable code hidden on websites and then include it within the software or training data. Such incidents of  “indirect prompt injection” can send personal data or credit card information back to the hacker. This hidden code could even be distributed via email, creating a virus that could ostensibly impact thousands of people with very little effort.

According to Arvind Narayanan, a computer science professor at Princeton University, 

“Essentially any text on the web, if it’s crafted the right way, can get these bots to misbehave when they encounter that text.”

To explore this issue further, check out our article, ChatGPT provides a boost to cyberscams.

With this in mind, when you download an app or install new software, it would be helpful to know that a human has reviewed all of the code and done proper security checks.

Relying on AI can lead to problems

To better illustrate this issue, let’s look at a few areas where an impersonal AI bot or algorithm could be a problem.

  • Healthcare. Hospitals, insurance providers, and other medical systems that turn to AI to make healthcare decisions or diagnoses could lead to major advances in care — or, given the bias in many systems, may ultimately make it more difficult for some people to access the care they need.
  • Recruitment automation. Human resources offices and recruiters have long used automated systems to sort and filter resumes and job applications using Applicant Tracking Systems (ATS). However, more of this process may be in the hands of AI than ever before. Forbes reports that over half of US companies currently use automation and envision AI making the hiring process easier. However, a PEW Research Center survey showed that 71% of Americans oppose the use of AI in final hiring decisions, and 66% would avoid applying to companies that use AI as part of the process. 
  • Customer service chatbots. AI systems that mimic human representatives can be difficult to distinguish from a real person who can understand the nuance of your situation. According to a recent survey, 60% of people using chatbots find their interactions disappointing. Worse, 32% of us can’t even tell when we’re interacting with a chatbot in the first place. 
  • Housing. Many companies now run rental applications through AI systems to determine if a renter is a good risk or not. But the AI algorithms may have a racial bias, making a system that should be color-blind anything but.
  • News media. Already many media companies have announced that they will be using AI to generate some or all of their news content. Setting aside the human cost to those decisions (which is bad enough), real questions arise about the accuracy of news content if there isn’t significant human oversight of everything an AI generates. 

What should companies consider before using AI services?

According to a survey by Cogito, an AI services company, 43% of respondents would have a more favorable opinion of a company if the brand was transparent about its use of AI, as well as what kind of data the AI collected on users and how it was used. VentureBeat, which reported on the survey, concluded that with the rapid adoption of AI technology, 

“AI creators and the industry at large must be more explicit about the technology’s role and the support it provides to create a more productive, trusting, and open-minded future of work, where AI and humans can work in symbiosis.”

Some tech companies have already learned their lesson and have scaled back on AI implementation, recognizing that algorithms can’t always make the right choices when serving out content. For example, Meta was criticized for showing anti-semitic content, and nearly everyone on Facebook has fallen victim to the AI moderation flagging innocuous content as harmful or dangerous. 

As Navrina Singh, founder and CEO of Credo AI, a company promoting responsible AI, told Fast Company, 

“Companies are racing to AI for its competitive advantages. But to reap those rewards, it’s critical that you are building artificial intelligence with human-centered values. Governance is an aid to innovation; we need to bring these two concepts together in artificial intelligence.” 

Ethical companies will disclose how they use AI

AI transparency is not just a nice-to-have goal but is necessary in our increasingly AI-centric world. Companies wielding AI as a tool must come to recognize that AI is more than efficiency and cost savings. It’s crucial that brands disclose how AI uses data and explain how customers can request a human review if decisions go awry. 

Transparency isn’t just an ethical choice — it’s also a strategic one. It has the power to shape consumer perceptions, build trust, and drive brand loyalty. After all, who wouldn’t prefer a brand that’s honest about its AI usage, data collection, and decision-making?

Was this article helpful?
0
Get the latest news and deals Sign up for email updates covering blogs, offers, and lots more.
I'd like to receive:

Your data is kept safe and private in line with our values and the GDPR.

Check your inbox

We’ve sent you a confirmation email to check we 100% have the right address.

Help us blog better

What would you like us to write more about?

Thank you for your help

We are working hard to bring your suggestions to life.

Jackie Dana avatar

Jackie Dana

Jackie has been writing since childhood. As the Namecheap blog’s content manager and regular contributor, she loves bringing helpful information about technology and business to our customers. In her free time, she enjoys drinking copious amounts of black tea, writing novels, and wrangling a gang of four-legged miscreants. More articles written by Jackie.

More articles like this
Get the latest news and deals Sign up for email updates covering blogs, offers, and lots more.
I'd like to receive:

Your data is kept safe and private in line with our values and the GDPR.

Check your inbox

We’ve sent you a confirmation email to check we 100% have the right address.

Hero image of Namecheap’s 12 favorite Private Email featuresCan AI be trusted? The need for transparency
Next Post

Namecheap’s 12 favorite Private Email features

Read More