Deepfake is an AI-based technology used to produce or alter video content so that it presents something that didn't, in fact, occur. The word, which applies both to the technology and to the videos created with it, is a portmanteau of deep learning and fake.
The technology to create convincing but inauthentic audio and video content already exists, and it is rapidly improving. Photo editing software, such as Photoshop, has long been used to falsify images, and the public is learning to apply common sense and critical thinking when presented with a picture whose content seems unlikely. Until recently, however, video content has been much more difficult to alter in any substantial way. As a result, video has often been treated as proof that something actually happened.
Because deepfakes are generated by AI, they don't require the considerable skill it would otherwise take to create a realistic fake video. That means just about anyone can create a deepfake to promote a chosen agenda. One danger is that people will take such videos at face value; another is that people will stop trusting the validity of any video content at all.
The technology is named for a Reddit user known as deepfakes who, in December 2017, used deep learning to edit the faces of celebrities onto performers in pornographic video clips.
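The deep learning approach popularized by that original face-swapping code is commonly an autoencoder with one encoder shared across two people and a separate decoder per person: the encoder learns pose and expression, each decoder learns to render one face, and swapping is done by decoding person A's frame with person B's decoder. Below is a minimal numpy sketch of that architecture only; the weights are random and untrained, and all names and sizes are illustrative rather than taken from any real deepfake tool.

```python
import numpy as np

rng = np.random.default_rng(0)

FACE_DIM = 64 * 64    # flattened grayscale face crop (illustrative size)
LATENT_DIM = 128      # compressed representation of pose/expression

# One encoder shared by both identities.
W_enc = rng.standard_normal((FACE_DIM, LATENT_DIM)) * 0.01

# One decoder per identity: each would learn to render its own face.
W_dec_a = rng.standard_normal((LATENT_DIM, FACE_DIM)) * 0.01
W_dec_b = rng.standard_normal((LATENT_DIM, FACE_DIM)) * 0.01

def encode(face):
    # Map a face image to the shared latent representation.
    return np.tanh(face @ W_enc)

def decode(latent, w_dec):
    # Render a face from the latent representation with one
    # identity's decoder.
    return np.tanh(latent @ w_dec)

# Training (omitted here) would teach encode + W_dec_a to reconstruct
# person A's faces and encode + W_dec_b to reconstruct person B's.
# The swap trick: encode a frame of person A, then decode it with
# person B's decoder, yielding B's face in A's pose and expression.
frame_of_a = rng.standard_normal(FACE_DIM)
swapped = decode(encode(frame_of_a), W_dec_b)

print(swapped.shape)  # same shape as the input face
```

The key design point is the shared encoder: because both identities pass through the same bottleneck, the latent code captures what the faces have in common (pose, lighting, expression) while the per-identity decoders supply the appearance, which is what makes the swap possible.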
Deepfakes and the technology behind them: