The biggest challenge to internet health, Mozilla says, is AI power disparity and damage
The biggest challenge to the health of the internet is the power disparity between those who benefit from AI and those who suffer from it, according to Mozilla's newly released 2022 Internet Health Report.
Once again, the report puts the spotlight on how businesses and governments use AI. Mozilla examined the nature of the AI-driven world by citing real-life examples from different countries.
TechRepublic sat down with Solana Larsen, editor of Mozilla’s Internet Health Report, to shed some light on the concept of “responsible AI from the start,” black box AI, the future of regulations, and how some AI projects lead by example.
SEE: Artificial Intelligence Ethics Policy (TechRepublic Premium)
Larsen explains that AI systems should be built from the start with ethics and responsibility in mind, not added at a later date when damage begins to show.
“As logical as it sounds, it really doesn’t happen enough,” Larsen said.
According to Mozilla's findings, centralizing influence and control over AI does not work to the benefit of the majority of people. As AI is adopted around the world and the technology grows in scale, the issue has become a major concern.
MarketWatch's AI Disruption Report reveals just how important AI has become. 2022 opened with over $50 billion in new opportunities for AI companies, and the industry is expected to grow to $300 billion by 2025.
The adoption of AI at all levels is now inevitable. Thirty-two countries have already adopted AI strategies, more than 200 projects with more than $70 billion in public funding have been announced in Europe, Asia and Australia, and startups are raising billions in thousands of deals worldwide.
More importantly, AI applications have moved from rule-based AI to data-driven AI, and much of the data used by these models is personal data. Mozilla recognizes the potential of AI but warns that it is already causing harm daily around the world.
“We need AI builders from diverse backgrounds who understand the complex interplay of data, AI, and how it can affect different communities,” Larsen told TechRepublic. She called for regulations to ensure AI systems are designed to help, not harm.
Mozilla's report also focuses on the problem of AI data: large, frequently reused datasets are put to work even though they do not guarantee the results achieved by smaller datasets designed specifically for a project.
The data used to train machine learning algorithms often comes from public sites like Flickr. The organization warns that many of the most popular datasets are made up of content culled from the internet, which "overwhelmingly reflects words and images that skew toward the English, American, white and male gaze."
Black box AI: demystifying artificial intelligence
AI seems to get away with much of the harm it causes thanks to its reputation for being too technical and advanced for people to understand. In the industry, an AI that uses a machine learning model humans cannot inspect is known as black box AI, a label that reflects its lack of transparency.
Larsen says that to demystify AI, users must have transparency into what the code does, what data it collects, what decisions it makes, and who benefits.
"We really have to reject the idea that AI is too advanced for people to have an opinion unless they're data scientists," Larsen said. "If you're harmed by a system, you know something about it that maybe even its own designer doesn't."
Companies like Amazon, Apple, Google, Microsoft, Meta and Alibaba top the list of those reaping the most benefit from AI-powered products, services and solutions. But other sectors and applications raise red flags for the damage they create: military and surveillance uses, computational propaganda — deployed in 81 countries in 2020 — and disinformation, as well as AI-related bias and discrimination in healthcare, finance and law.
Regulating AI: from talk to action
Big tech companies are known to often push back on regulations. Military and government AI also operate in an unregulated environment, often clashing with human rights and privacy activists.
Mozilla believes that regulations can be safeguards for innovation that help foster trust and level the playing field.
“It’s good for businesses and consumers,” Larsen says.
Mozilla supports regulations such as the Digital Services Act (DSA) in Europe and closely follows the EU's AI Act. The organization also supports bills in the United States that would make AI systems more transparent.
Data privacy and consumer rights are also part of the legal landscape that could help pave the way for more responsible AI. But regulation is only part of the equation. Without enforcement, regulations are just words on paper.
"We need a critical mass of people calling for change and accountability, and we need AI builders who put people before profit," Larsen said. "A lot of AI research and development right now is big tech-funded, and we need alternatives too."
SEE: Metaverse Cheat Sheet: Everything You Need to Know (Free PDF) (TechRepublic)
Mozilla's report documented AI projects causing harm across multiple companies, countries and communities. The organization cites AI projects that affect gig workers and their working conditions, including the invisible army of low-wage workers who train AI technology on sites like Amazon Mechanical Turk, with average wages as low as $2.83 an hour.
“In real life, again and again, the harms of AI disproportionately affect people who are not advantaged by global systems of power,” Larsen said.
The organization is also taking action itself.
One example is Mozilla's RegretsReporter browser extension, which turns everyday YouTube users into watchdogs, crowdsourcing research into the platform's recommendation AI.
Working with tens of thousands of users, Mozilla's investigation found that YouTube's algorithm recommends videos that violate the platform's own policies. The investigation produced results: YouTube is now more transparent about how its recommendation AI works. But Mozilla does not intend to stop there, and today continues its research in different countries.
Larsen explains that Mozilla considers it paramount to shed light on and document AI systems operating in questionable conditions. Additionally, the organization calls for dialogue among tech companies in an effort to understand problems and find solutions, and is contacting regulators to discuss what rules are needed.
AI leading by example
While the Mozilla 2022 Internet Health Report paints a rather gloomy picture of AI, magnifying the problems the world has historically faced, the company is also highlighting AI projects built and designed for a good cause.
For example, the Drivers Cooperative in New York, an app used — and owned — by more than 5,000 rideshare drivers, is helping gig workers gain real agency in the ridesharing industry.
Another example is Melalogic, a Black-owned business in Maryland that is crowdsourcing images of dark skin for better detection of cancer and other skin conditions, in response to severe racial bias in machine learning for dermatology.
“There are many examples around the world of AI systems being reliably and transparently built and used,” Larsen said.