The Federal Trade Commission (FTC) issued a report to Congress warning against relying on artificial intelligence (AI) to combat online harms. These harms include fraud, impersonation scams, fake reviews and accounts, bots, media manipulation, illegal drug sales and other illegal activities, sexual exploitation, hate crimes, harassment and cyberstalking, and misinformation campaigns aimed at influencing elections.
The report raises concerns about inaccuracy, bias, discrimination, and increasingly invasive commercial surveillance, and it encourages legislators to craft policies ensuring that AI tools do not create additional problems. Its recommendations include:
- Avoiding over-reliance on AI tools, which can produce false positives and false negatives;
- Using more human oversight;
- Providing more transparency and accountability;
- Being more responsible with data science;
- Improving platform AI interventions;
- Creating tools to give individual users options for content control;
- Making AI tools more available and scalable so that access is not limited to large technology companies;
- Enhancing content authenticity; and
- Passing laws that change the business models or incentives allowing harmful content to proliferate.
Samuel Levine, Director of the FTC’s Bureau of Consumer Protection, said,
"Our report emphasizes that nobody should treat AI as the solution to the spread of harmful online content…Combatting online harm requires a broad societal effort, not an overly optimistic belief that new technology—which can be both helpful and dangerous—will take these problems off our hands."
The full report can be found here.
For more information on topics related to this piece, visit the HSDL In Focus about Cyber Policy, Cyber Crime & National Security, and Disinformation, or search specific resources that discuss artificial intelligence and cybersecurity.