Rob Chesley, Partner Enablement Manager, TD SYNNEX
Deepfakes and human error in the context of artificial intelligence (AI) pose significant challenges to cybersecurity. In 2019, the CEO of a U.K. energy firm was tricked into believing he was speaking with the chief executive of the firm's parent company: a deepfake mimicking the executive's voice convinced him to transfer €220,000 to a supposed Hungarian supplier's bank account.1
Deepfakes and Cybersecurity
Deepfakes use AI-driven technology, such as generative adversarial networks (GANs), to create hyper-realistic fake videos, audio or images that mimic real people; a toy sketch of the adversarial training loop appears after the list below. Deepfakes can undermine cybersecurity in the following ways:
· Social Engineering and Phishing Attacks: Deepfakes can be used to impersonate individuals, such as executives or employees, tricking others into divulging sensitive information or transferring funds.
· Disinformation and Manipulation: Deepfakes can be used to spread misinformation or disinformation, severely damaging public trust. For companies, this can mean reputational damage, stock market manipulation or other economic harm.
· Bypassing Security Systems: Some cybersecurity systems, such as biometric authentication, rely on facial recognition or voice verification. Deepfakes can fool these systems, granting attackers unauthorized access.
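To make the GAN mechanism concrete, the following toy sketch (Python with PyTorch) trains a generator to mimic a simple one-dimensional "real" distribution while a discriminator tries to tell real from fake. It is the same adversarial loop that, at much larger scale, produces convincing synthetic faces and voices; the model sizes, data and hyperparameters here are illustrative assumptions, not production deepfake code.

# Toy GAN: generator G learns to mimic "real" data; discriminator D
# learns to tell real samples from generated ones.
import torch
import torch.nn as nn

torch.manual_seed(0)

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))  # noise -> fake sample
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))  # sample -> "realness" logit

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 3.0  # "real" data: N(3.0, 0.5)
    fake = G(torch.randn(64, 8))

    # Discriminator step: label real samples 1, generated samples 0.
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: try to make D score fresh fakes as real.
    g_loss = bce(D(G(torch.randn(64, 8))), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

print(f"mean of generated samples: {G(torch.randn(1000, 8)).mean():.2f} (target 3.0)")

After training, the generated samples cluster around the real distribution, meaning the generator produces output the discriminator can no longer reliably distinguish from real data, which is exactly the property that makes deepfakes dangerous.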
Human Error and AI in Cybersecurity
AI systems are only as good as the data they’re trained on and the humans who configure or use them. Human error can introduce vulnerabilities that attackers exploit:
· Misconfiguration of AI Systems: AI models for cybersecurity, such as those for threat detection, require precise configuration. If set up incorrectly, they may overlook malicious activity or generate false positives, overwhelming security teams.
· Data Handling Errors: Human mistakes in labeling, processing or curating data can cause AI systems to make inaccurate predictions or decisions, leading to missed or misidentified threats or to the wrong response actions.
· Trust in AI Decisions: Over-reliance on AI decisions without human oversight is risky. If AI systems are manipulated (e.g., through adversarial attacks), humans may blindly trust the incorrect outputs, leading to security breaches. The sketch after this list shows one human-in-the-loop safeguard.
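As a concrete illustration of human oversight, the hypothetical triage policy below acts automatically only on high-confidence model verdicts and routes borderline scores to an analyst. The names, thresholds and scores are assumptions made for this sketch, not any vendor's API, and real thresholds would need tuning against measured false-positive and false-negative rates.

# Hypothetical human-in-the-loop triage for an AI threat detector.
from dataclasses import dataclass

@dataclass
class Alert:
    source_ip: str
    score: float  # model's threat probability, 0.0-1.0

BLOCK_THRESHOLD = 0.95   # auto-act only when the model is very confident
REVIEW_THRESHOLD = 0.60  # borderline scores go to a human analyst

def triage(alert: Alert) -> str:
    if alert.score >= BLOCK_THRESHOLD:
        return f"auto-block {alert.source_ip}"
    if alert.score >= REVIEW_THRESHOLD:
        return f"escalate {alert.source_ip} to analyst queue"
    return f"log {alert.source_ip} only"

for a in (Alert("203.0.113.7", 0.99), Alert("198.51.100.4", 0.72), Alert("192.0.2.10", 0.20)):
    print(triage(a))

Keeping the automatic-action threshold deliberately high, and auditing the analyst queue over time, limits the damage from both a misconfigured model and an adversarially manipulated one.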
Both deepfakes and human error in AI can undermine security decisions through social engineering, data manipulation or system vulnerabilities. Addressing these risks requires a combination of technical controls and well-trained, vigilant people.
TD SYNNEX ServiceSolv can support and guide partners through an AI Launchpad engagement focused on aligning AI strategy with business objectives. The engagement covers stakeholder interviews, use case identification and prioritization, and creates a foundation for execution and tracking so that AI solutions are actionable and consumable by the organization. Our objective is to prepare partners for AI initiatives by establishing a roadmap for success and highlighting high-impact areas for AI adoption. For any questions regarding ServiceSolv, reach out to ServiceBD@tdsynnex.com. For general cybersecurity inquiries, use CyberSolv@tdsynnex.com.
1 https://www.fortinet.com/resources/cyberglossary/deepfake