From the Document: "'Deep fakes'--a term that first emerged in 2017 to describe realistic photo, audio, video, and other forgeries generated with artificial intelligence (AI) technologies--could present a variety of national security challenges in the years to come. As these technologies continue to mature, they could hold significant implications for congressional oversight, U.S. defense authorizations and appropriations, and the regulation of social media platforms.

[...]

Some analysts have noted that algorithm-based detection tools could lead to a cat-and-mouse game, in which the deep fake generators are rapidly updated to address flaws identified by detection tools. For this reason, they argue that social media platforms--in addition to deploying deep fake detection tools--may need to expand the means of labeling and/or authenticating content. This could include a requirement that users identify the time and location at which the content originated or that they label edited content as such.

Other analysts have expressed concern that regulation of deep fake technology could impose undue burden on social media platforms or lead to unconstitutional restrictions on free speech and artistic expression. These analysts have suggested that existing law is sufficient for managing the malicious use of deep fakes.

Some experts have asserted that responding with technical tools alone will be insufficient and that instead the focus should be on the need to educate the public about deep fakes and minimize incentives for creators of malicious deep fakes."
CRS In Focus, IF11333
Congressional Research Service: https://crsreports.congress.gov/
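
The labeling and authentication approach the report describes (recording when and where content originated, and flagging edited content) could take many forms; the sketch below is one hypothetical illustration in Python, not any platform's actual scheme. All names (`ContentLabel`, `make_label`, `label_matches`) are invented for this example. The key idea it shows is binding a self-reported label to the media bytes with a hash, so a later check can tell whether the content was altered after labeling.

```python
import hashlib
import json
from dataclasses import dataclass, asdict

# Hypothetical content label: origin time/location as self-reported
# by the uploader, a user-declared "edited" flag, and a hash that
# binds the label to the exact media bytes. Invented for this
# sketch; not a real platform API or standard.

@dataclass
class ContentLabel:
    origin_time: str      # when the content originated (ISO 8601)
    origin_location: str  # where it originated, as self-reported
    edited: bool          # user-declared "this content was edited"
    content_sha256: str   # hash binding the label to the bytes

def make_label(content: bytes, origin_time: str,
               origin_location: str, edited: bool) -> ContentLabel:
    digest = hashlib.sha256(content).hexdigest()
    return ContentLabel(origin_time, origin_location, edited, digest)

def label_matches(content: bytes, label: ContentLabel) -> bool:
    # Any alteration to the bytes after labeling breaks the match.
    return hashlib.sha256(content).hexdigest() == label.content_sha256

video = b"...raw media bytes..."
label = make_label(video, "2019-06-01T12:00:00Z",
                   "Washington, DC", edited=False)
print(json.dumps(asdict(label), indent=2))
print(label_matches(video, label))          # True: bytes unchanged
print(label_matches(video + b"x", label))   # False: bytes altered
```

Note the limitation, consistent with the report's skepticism about technical tools alone: the hash only proves the content was not changed after labeling; it cannot verify that the self-reported time and location are truthful.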