Ethical AI Team Says Bias Bounties Can More Quickly Expose Algorithmic Flaws

Bias in AI systems is becoming a major obstacle to efforts to integrate the technology more broadly into society. A new initiative that rewards researchers for finding biases in AI systems could help solve the problem.

The effort is modeled on the bug bounties that software companies pay cybersecurity experts who alert them to possible security flaws in their products. The idea is not new: a “bias bounty” was first proposed in 2018 by AI researcher and entrepreneur JB Rubinowitz, and various organizations have already run such challenges.

But the new effort seeks to create a continuous forum for bias bounty contests that is independent of any particular organization. Made up of volunteers from several companies, including Twitter, the so-called “Bias Buccaneers” plan to organize regular competitions, or “mutinies,” and launched the first such challenge earlier this month.

“Bug bounties are a standard practice in cybersecurity that has yet to find a foothold in the algorithmic bias community,” the organizers say on their website. “While early one-off events have shown enthusiasm for bounties, Bias Buccaneers is the first non-profit organization that aims to create ongoing mutinies, collaborate with technology companies, and pave the way for transparent and reproducible evaluation of AI systems.”


The first competition aims to tackle bias in image-detection algorithms, but rather than asking people to target specific AI systems, the competition challenges researchers to build tools that can detect biased datasets. The idea is to build a machine learning model that can accurately label each image in a dataset with its skin tone, perceived gender, and age group. The competition ends on November 30, with a first prize of $6,000, a second prize of $4,000, and a third prize of $2,000.
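As a rough illustration of what such an entry might look like, here is a minimal sketch in PyTorch of a multi-attribute labeling model: a shared image backbone with a separate classification head for skin tone, perceived gender, and age group. The label sets, the `AttributeLabeler` class, and the choice of backbone are all illustrative assumptions rather than the challenge’s actual specification, and the model is untrained; a real entry would fine-tune it on annotated images.

```python
import torch
import torch.nn as nn
from torchvision import models

# Hypothetical label vocabularies -- the real challenge defines its own.
SKIN_TONES = [f"tone_{i}" for i in range(1, 11)]  # e.g. a 10-point skin tone scale
GENDERS = ["feminine", "masculine", "unclear"]
AGE_GROUPS = ["0-17", "18-30", "31-60", "61+"]

class AttributeLabeler(nn.Module):
    """Shared image backbone with one classification head per attribute (illustrative)."""
    def __init__(self):
        super().__init__()
        backbone = models.resnet18(weights=None)  # shared feature extractor
        feat_dim = backbone.fc.in_features
        backbone.fc = nn.Identity()  # expose raw features instead of ImageNet logits
        self.backbone = backbone
        self.skin_head = nn.Linear(feat_dim, len(SKIN_TONES))
        self.gender_head = nn.Linear(feat_dim, len(GENDERS))
        self.age_head = nn.Linear(feat_dim, len(AGE_GROUPS))

    def forward(self, images):
        feats = self.backbone(images)
        return {
            "skin_tone": self.skin_head(feats),
            "gender": self.gender_head(feats),
            "age_group": self.age_head(feats),
        }

model = AttributeLabeler().eval()
with torch.no_grad():
    logits = model(torch.randn(1, 3, 224, 224))  # one dummy 224x224 RGB image

# Pick the highest-scoring label for each attribute.
prediction = {
    "skin_tone": SKIN_TONES[logits["skin_tone"].argmax(dim=1).item()],
    "gender": GENDERS[logits["gender"].argmax(dim=1).item()],
    "age_group": AGE_GROUPS[logits["age_group"].argmax(dim=1).item()],
}
print(prediction)
```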

The challenge is based on the fact that the source of algorithmic bias is often not so much the algorithm itself as the nature of the data it is trained on. Automated tools that can quickly assess how balanced an image collection is across attributes that are common sources of discrimination could help AI researchers avoid clearly biased data sources.
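To make that concrete, here is a minimal sketch, using only the Python standard library, of the kind of balance check such a tool might perform: count how often each attribute value appears in a labeled image collection and flag values that fall below a chosen share. The `audit_balance` function, the 10 percent threshold, and the sample labels are illustrative assumptions, not part of the challenge.

```python
from collections import Counter

# Hypothetical per-image labels, e.g. produced by a model like the one above.
dataset_labels = [
    {"skin_tone": "tone_2", "gender": "feminine", "age_group": "18-30"},
    {"skin_tone": "tone_2", "gender": "masculine", "age_group": "18-30"},
    {"skin_tone": "tone_8", "gender": "feminine", "age_group": "61+"},
    # ... thousands more images in a real audit
]

def audit_balance(labels, attribute, min_share=0.10):
    """Print each value's share of the dataset and flag underrepresented ones."""
    counts = Counter(example[attribute] for example in labels)
    total = sum(counts.values())
    for value, count in counts.most_common():
        share = count / total
        flag = "  <-- underrepresented" if share < min_share else ""
        print(f"{attribute}={value}: {share:.1%}{flag}")

audit_balance(dataset_labels, "skin_tone")
```

A report like this does not prove an algorithm is biased, but it gives researchers a quick, reproducible way to spot skewed training data before it propagates into a deployed model.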

But the organizers say that is just the first step in an effort to build a toolkit for assessing bias in datasets, algorithms, and applications, and eventually to create standards for how to deal with algorithmic bias, including fairness and explainability.


This is not the only such attempt. One of the leaders of the new initiative is Twitter’s Rumman Chowdhury, who helped organize the first AI bias bounty contest last year. That contest targeted the algorithm the platform used to crop pictures, which users complained favored white-skinned and male faces over Black and female ones.

The competition gave hackers access to the company’s model and challenged them to find flaws in it. Entrants found a wide range of problems, including a preference for stereotypically beautiful faces, an aversion to people with white hair (a marker of age), and a preference for memes written in English rather than Arabic script.

Stanford University also recently concluded a competition that challenged teams to create tools designed to help people audit commercially deployed or open-source AI systems for discrimination. And current and upcoming EU laws may require companies to regularly audit their data and algorithms.

But taking AI bug bounties and algorithmic auditing mainstream and making them effective will be easier said than done. Inevitably, companies that build their businesses on their algorithms are going to resist any attempt to discredit them.


Drawing on lessons from auditing regimes in other domains, such as finance and environmental and health regulation, researchers recently outlined some of the key ingredients of effective accountability. One of the most important criteria they identified is the meaningful involvement of independent third parties.

The researchers pointed out that current voluntary AI audits often involve conflicts of interest, such as the target organization paying for the audit, helping shape its scope, or having the chance to review findings before they are released. That concern was echoed in a recent report from the Algorithmic Justice League, which noted the outsized role that target organizations play in current cybersecurity bug bounty programs.

Finding a way to fund and support truly independent AI auditors and bug hunters will be a significant challenge, especially as they go up against some of the world’s best-resourced companies. Fortunately, though, there is a growing sense in the industry that addressing this issue will be critical to maintaining users’ trust in their services.

Image credit: Jakob Rosen / Unsplash


