Tech Beat by Namecheap – 9 June 2023
The pervasive use of AI in software, customer support, and other commercial applications has sparked calls for increased transparency from the companies that rely on it. With AI becoming an integral part of many companies’ operations, users often interact with AI without realizing it, which can lead to frustration or misunderstanding. Issues like AI bias, problematic AI-generated content, security concerns, and the impersonal nature of AI interactions further underscore the need for transparency.
Check out this week’s feature, in which we dig into this issue of AI and brand transparency.
In other news
- App-killing API prices spark Reddit revolt. The Reddit community is rising up in response to the platform’s planned API pricing, which threatens to shutter popular third-party Reddit clients for mobile devices. As reported in The Register, major subreddits, including r/videos, r/bestof, and r/lifehacks, have pledged to go offline for at least 48 hours on June 12, with some potentially closing permanently unless the issue is addressed. The controversy ignited when Christian Selig, developer of the Apollo app, revealed he would have to cease development because of the prohibitive costs, which he estimated at $20 million annually. Other third-party app developers echoed Selig’s concerns, with some suggesting that Reddit’s move is an attempt to eliminate these apps entirely. Volunteer Reddit moderators say the official Reddit app lacks vital moderation tools, forcing them to rely on third-party clients.
- Homeland Security is using AI to spy on refugees, citizens, and asylum seekers. According to Vice, Customs and Border Protection (CBP) has been using Babel X to check the personal information of refugees, US citizens, and asylum seekers since 2019. When an agent enters any information relating to an individual into the AI tool, it returns a large amount of related data, which may include IP addresses, locations, employment history, and social media posts. The tool comes from Babel Street and provides access to information in 200 languages from the Internet, the dark web, and the deep web. This has raised privacy concerns from groups including the American Civil Liberties Union (ACLU).
- Solar power from space has been transmitted to Earth for the first time. Scientists from the California Institute of Technology’s Space Solar Power Project have revealed they beamed solar power from space to Earth without wires. According to Gizmodo, this is the first time space-based solar energy has been successfully transmitted via microwaves. Researchers used a small prototype, the Microwave Array for Power-transfer Low-orbit Experiment, or MAPLE, currently on board the in-orbit Space Solar Power Demonstrator, which launched in January. They believe this development could democratize access to energy globally: because receiving the power doesn’t require energy transmission infrastructure on the ground, it could be beamed to any region in the world that needs it.
- The FTC charged Amazon with a slew of privacy violations over Ring and Alexa. A complaint filed by the Department of Justice on behalf of the Federal Trade Commission (FTC) accuses Amazon of deceiving users of two of its most popular IoT products about its data deletion practices. A press release outlines numerous violations, including keeping children’s recordings and geolocation data indefinitely. Although Amazon assures customers that they can delete such sensitive information from the apps, the FTC found that the tech giant kept the data for years. Among other purposes, Amazon uses recordings to improve Alexa’s speech recognition capabilities, as children’s speech patterns tend to differ from adults’. As a result, Amazon must pay a $25 million civil penalty and carry out several other provisions, such as creating and implementing a privacy program covering its use of geolocation information.
- High-price phone fiasco. In an unusual incident that’s made waves in India, a government food inspector named Rajesh Vishwas dropped his pricey Samsung smartphone into the Kherkatta Dam reservoir in the central Indian state of Chhattisgarh while taking a selfie. After local divers failed to recover the phone, Vishwas had a diesel pump drain millions of liters of water over several days, enough to irrigate six square kilometers of farmland. As the BBC reported, given the water shortages faced by several regions in the country, Vishwas’s actions sparked outrage and led to his suspension. The local government found Vishwas had illegally diverted 4.1 million liters (880,000 gallons) of water and ordered him to pay 53,092 rupees ($640) to cover the cost of the wasted water. And as for the phone? It was eventually recovered, but it was too waterlogged to work. Maybe next time, he’ll use a selfie stick?
Previously in Tech Beat: Voluntary surveillance and the high price for high tech
The rise of connected home devices and health trackers is leading to an unprecedented increase in “luxury surveillance”: individuals handing tech companies their personal data in exchange for perceived benefits. Devices and assistants like Amazon Halo, Fitbit, Ring, Eufy, Alexa, and Siri offer benefits ranging from monitoring sleep cycles and steps taken to enhancing home security and automating everyday life. However, these conveniences come at the cost of providing private companies (and unknown third parties) with intimate biometric information and personal details.
Such data is susceptible to abuse, a risk that is already becoming a reality. For instance, some companies are using fitness trackers to monitor their employees, while home surveillance footage has been handed to law enforcement without user permission.
Read more about this issue in our recent article, Luxury surveillance and why it should concern you.
Tip of the week: Make AI work for you
With the extensive media attention paid to generative AI tools this year, sometimes it can be difficult to know exactly how to respond. Is this the beginning of a revolution or the end of modern civilization as we know it? We can all imagine bright or nightmarish visions of the future, but the best approach is to be practical about our use of technology in the present.
Here are some basic principles to keep in mind when applying AI to your business practices:
- Check everything. Whether it’s fact-checking research or reviewing AI-written code, tools like ChatGPT can’t be left to do it all without human oversight.
- Apply quality training. Just like employees, machine learning models need to be trained properly. That means using the right data for training and making sure models are tested and validated (see the sketch after this list).
- Monitor your input data. Make sure you’re using data that’s relevant, reliable, and free of bias.
- Stay updated. The world is moving quickly, so that means everything from your machine learning models to your cybersecurity tools needs to be up to date.
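To make the training and data-monitoring points above concrete, here’s a minimal sketch in Python of what “tested and validated” can look like in practice, assuming scikit-learn and pandas are installed. The synthetic dataset, logistic regression model, and split sizes are illustrative placeholders, not a recommendation for any specific workflow.

```python
# A minimal sketch of basic data checks plus a train/validation/test split.
# The dataset and model here are stand-ins for your own business data and model.
import pandas as pd
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# Placeholder for real input data.
X, y = make_classification(n_samples=1_000, n_features=10, random_state=42)
df = pd.DataFrame(X)

# Monitor your input data: check for missing values and class imbalance
# before any training happens.
assert df.isna().sum().sum() == 0, "Input data contains missing values"
print("Class balance:\n", pd.Series(y).value_counts(normalize=True))

# Hold out separate validation and test sets so the model is always
# evaluated on data it never saw during training.
X_train, X_temp, y_train, y_temp = train_test_split(
    X, y, test_size=0.4, random_state=42
)
X_val, X_test, y_val, y_test = train_test_split(
    X_temp, y_temp, test_size=0.5, random_state=42
)

model = LogisticRegression(max_iter=1_000)
model.fit(X_train, y_train)

# Validate first, then confirm on the untouched test set.
print("Validation accuracy:", accuracy_score(y_val, model.predict(X_val)))
print("Test accuracy:", accuracy_score(y_test, model.predict(X_test)))
```

The same pattern applies whatever model or tooling you use: check the data before training, and keep a validation set plus a final test set that the model never sees while it learns.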