Social Engineering
Manipulation techniques that exploit human psychology rather than technical vulnerabilities to trick people into revealing sensitive information, granting access, or transferring money.
Social engineering attacks target people, not systems. Instead of exploiting a software vulnerability, an attacker exploits trust, urgency, authority, or fear to manipulate someone into taking an action that compromises security. Phishing is the most common form, but social engineering extends far beyond email.
Common Techniques
- Phishing: Fraudulent emails, SMS, or messages impersonating trusted entities.
- Pretexting: Creating a fabricated scenario (“I’m from IT, we need your credentials to fix an urgent issue”).
- Baiting: Leaving infected USB drives in parking lots or offering free downloads that contain malware.
- Tailgating: Physically following an authorized person through a secure door.
- Vishing: Voice phishing via phone calls, often spoofing legitimate numbers.
- CEO fraud: Impersonating executives to request urgent wire transfers or sensitive data.
Why Technical Defenses Aren’t Enough
A firewall cannot block an employee from voluntarily sharing credentials over the phone. Encryption doesn’t help when someone is tricked into decrypting and sending data themselves. Social engineering bypasses technical controls by targeting the human layer.
Reducing the Risk
- 2FA: Even if a password is handed over, a second factor blocks most account takeovers. Hardware security keys are especially effective because they bind authentication to the legitimate site's origin, so they can't be phished remotely the way one-time codes can be relayed.
- Password managers: Auto-fill only offers credentials on the exact domain they were saved for, which neutralizes credential phishing on lookalike sites.
- Verification culture: Train teams to verify unusual requests through a separate channel (call back on a known number, walk over to confirm in person).
- Simulated attacks: Regular, realistic phishing simulations build recognition skills without shaming employees who fall for them.
- Reporting processes: Make it easy and blame-free to report suspicious contact. Every unreported attempt is a missed defense signal.
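The password-manager point above can be made concrete. A minimal sketch, assuming a simplified vault keyed by registered domain (real managers use more sophisticated origin matching), shows why a homoglyph domain never receives credentials:

```python
# Sketch of domain-bound auto-fill: credentials are keyed to the exact
# domain they were saved for, so a visually similar host never matches.
# VAULT and the matching rule are simplified assumptions for illustration.
from urllib.parse import urlparse

VAULT = {"example.com": ("alice", "hunter2")}  # hypothetical stored entry

def autofill(url: str):
    """Return credentials only if the page's host is the stored domain
    or a true subdomain of it; otherwise fill nothing."""
    host = urlparse(url).hostname or ""
    for domain, creds in VAULT.items():
        if host == domain or host.endswith("." + domain):
            return creds
    return None  # lookalike hosts get nothing

print(autofill("https://login.example.com/signin"))  # fills
print(autofill("https://examp1e.com/signin"))        # homoglyph: no fill
```

A human comparing `examp1e.com` to `example.com` under time pressure often misses the substituted character; the string comparison never does, which is why this control works even when training fails.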
The AI Escalation
Large language models enable attackers to generate highly convincing, personalized social engineering messages in any language at scale. The grammatical errors and awkward phrasing that once helped identify phishing attempts are disappearing. This makes process-based defenses (2FA, verification procedures, password managers) more important than ever, because training alone cannot keep pace.