As cyber threats accelerate, adopting a Zero Trust mindset and digital mindfulness has become essential for both individuals and organizations. Anna Collard, SVP of Content Strategy & Evangelist at KnowBe4 Africa, shares her expertise on cybersecurity, AI integration, and initiatives like the MiDO cyber academy, which empowers underprivileged youth. She also discusses the psychological aspects of technology use and offers strategies to enhance cybersecurity awareness and ethical practices in South Africa’s digital landscape.
How does the Zero Trust mindset enhance cybersecurity, and what role does digital mindfulness play in protecting individuals and organizations online?
In cybersecurity, the Zero Trust framework, which has been established for several years, assumes that no entity—user, system, or network—should be trusted by default; every access request must be verified. However, technology alone cannot address all cybersecurity challenges; human behavior remains a critical risk factor. This is where digital mindfulness and the Zero Trust mindset intersect and complement each other.
My research on social engineering identifies 33 human susceptibility factors that cybercriminals exploit, and mindfulness-based interventions have been shown to positively impact 23 of them. By improving cognitive resilience, mindfulness helps individuals recognize manipulation tactics, resist impulsive reactions, and engage in critical thinking.
Applying the Zero Trust mindset to human behavior means cultivating skepticism and vigilance in all digital interactions. Digital mindfulness encourages these habits by improving cognitive performance, emotional regulation, and meta-awareness, which are key defenses against urgency-based scams, authority pressure, and phishing attempts. Organizations that integrate digital mindfulness into security awareness programs empower employees to recognize when critical thinking is needed, strengthening security culture and complementing Zero Trust strategies.
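The "never trust, always verify" principle behind Zero Trust can be illustrated with a minimal sketch. All names here (`AccessRequest`, `authorize`, the policy dictionary) are hypothetical illustrations, not part of any real framework:

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user: str
    device_trusted: bool  # device posture check passed
    mfa_passed: bool      # identity re-verified via MFA
    resource: str

def authorize(req: AccessRequest, policy: dict) -> bool:
    """Zero Trust check: nothing is trusted by default, regardless of
    network location; every request is verified on every access."""
    if not req.device_trusted:   # verify the device, not the network
        return False
    if not req.mfa_passed:       # verify the identity on each request
        return False
    # Least privilege: only explicitly granted resources are reachable.
    return req.resource in policy.get(req.user, set())

policy = {"anna": {"payroll-db"}}
print(authorize(AccessRequest("anna", True, True, "payroll-db"), policy))   # True
print(authorize(AccessRequest("anna", True, False, "payroll-db"), policy))  # False
```

The human analogue is the same loop: treat each email, link, and request as unverified until checked, rather than extending trust by default.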
What challenges would a national cybersecurity helpline address in South Africa, and how could it improve public safety and awareness of cyber threats?
We are actively fundraising to launch a national cybersecurity helpline in South Africa, which could be a game-changer in improving public safety and awareness of cyber threats. The proposed SA Cyber Helpline aims to provide essential, first-line support for South Africans impacted by cybercrime, addressing needs for victim assistance and providing valuable, hands-on experience for MiDO Academy cybersecurity students.
Driven out of the MiDO Academy and supported by the UK CyberHelpline and our partner network, this multi-stakeholder initiative combines cybersecurity expertise, victim support tools, and localized training. It also provides much-needed work experience and training for our cybersecurity students.
Cybercrime disproportionately affects individuals and small businesses lacking the expertise to navigate complex digital threats. Many victims of financial fraud, sextortion scams, or phishing attacks do not know where to turn for help, leading to underreporting and an ongoing cycle of exploitation. The need for this support is urgent. According to SABRIC, South Africa saw over 52,000 cases of digital banking fraud, with losses exceeding R1 billion, in 2023 alone. Additionally, 92,959 CyberTipline complaints of suspected child sexual abuse material (CSAM) were reported from South Africa to the US-based National Center for Missing & Exploited Children, and the FBI's Internet Crime Complaint Center (IC3) received 1,290 cybercrime reports from South Africa in 2023, revealing the scope of cyber victimization. Since most cases go unreported, the real figure is much larger.
The SA Cyber Helpline could provide:
- Immediate guidance for victims of cybercrime, reducing panic and empowering them with next steps.
- Public awareness campaigns to increase cyber resilience across different demographics.
- Incident reporting mechanisms to improve national intelligence on cyber threats and help law enforcement take targeted action.
Please get in touch if you can help support this initiative.
What are the key risks and benefits of integrating AI in cybersecurity, and how should organizations navigate these when crafting their security strategies?
Artificial Intelligence (AI) presents both powerful opportunities and emerging risks in cybersecurity. On the one hand, AI-powered security tools enhance threat detection, automate response mechanisms, and analyze massive datasets faster than human analysts. AI is particularly valuable for detecting anomalies, predicting cyberattacks, and bolstering defensive capabilities through automated threat intelligence.
On the other hand, cybercriminals and state-sponsored actors are weaponizing AI to create more sophisticated threats—such as deepfakes, automated phishing, AI-generated malware, and cognitive manipulation attacks. This raises the need for organizations to:
- Adopt AI responsibly, ensuring transparency and bias mitigation in AI-driven security solutions. Implementations should also follow sound security principles such as least privilege and restrictions on what data the AI can access (think data protection and privacy), so that AI does not inadvertently expand the attack surface by introducing new vulnerabilities.
- Continuously test their own AI models against adversarial attacks to ensure resilience.
- Invest in human-AI collaboration, using AI to augment cybersecurity teams rather than replace human decision-making.
- Enhance threat intelligence sharing, as AI-driven cyber threats require a collective defense approach across industries.
- Train employees on AI-driven threats, ensuring that security awareness keeps pace with emerging attack vectors and that users understand what to do and what not to do when using their AI assistants and chatbots.
- Understand AI agents: I have written extensively in recent months about AI agents and the benefits and risks they pose.
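The least-privilege and data-restriction principles above can be sketched as a simple allow-list gate placed in front of an AI assistant's data retrieval. This is a hypothetical illustration, not a real product API; the source names and the `fetch_for_model` function are invented for the example:

```python
# Hypothetical sketch: restrict which data sources an AI assistant's
# tool calls may read, enforcing least privilege before any retrieval.
ALLOWED_SOURCES = {"public_docs", "product_faq"}    # explicitly granted only
SENSITIVE_SOURCES = {"hr_records", "customer_pii"}  # never exposed to the model

def fetch_for_model(source: str) -> str:
    """Gatekeeper between the model and internal data stores."""
    if source in SENSITIVE_SOURCES:
        raise PermissionError(f"AI access to '{source}' blocked (data protection)")
    if source not in ALLOWED_SOURCES:
        raise PermissionError(f"'{source}' not on the AI allow-list (least privilege)")
    return f"contents of {source}"  # placeholder for a real retrieval call

print(fetch_for_model("public_docs"))  # contents of public_docs
```

The design choice is deny-by-default: anything not explicitly granted is refused, which mirrors the Zero Trust posture discussed earlier and keeps a misbehaving or manipulated model from widening the attack surface.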
A balanced approach—leveraging AI’s defensive capabilities while remaining vigilant against its misuse—is key to crafting resilient security strategies. Organizations need to guard against their greatest enemy, their own complacency, while at the same time considering AI-driven security solutions thoughtfully and deliberately. Rather than rushing to adopt the latest AI security tool, decision makers should carefully evaluate AI-powered defenses to ensure they match the sophistication of emerging AI threats. Hastily deploying AI without strategic risk assessment could introduce new vulnerabilities, making a mindful, measured approach essential in securing the future of cybersecurity.
Can you share insights into the MiDO cyber academy’s initiatives for empowering underprivileged youth in South Africa?
The MiDO Cyber Academy bridges the cybersecurity skills gap while creating pathways out of poverty for underprivileged youth in South Africa. Through a structured training program, we provide students with:
- Industry-relevant cybersecurity skills that align with global certification requirements and employer needs.
- Professional development, including mentorship and exposure to real-world cybersecurity challenges.
- Work experience opportunities, ensuring graduates are job-ready and connected to potential employers and internship or learnership opportunities.
- Community building, promoting peer networks that support long-term career growth and knowledge sharing. The MiDO Tribe is the network that connects MiDO alumni and new students.
The initiative addresses two critical issues: youth unemployment and the cybersecurity talent shortage. By equipping young people with digital skills, we not only improve their economic prospects but also strengthen South Africa’s overall cybersecurity resilience. Partnerships with private sector organizations, educational institutions, and government bodies help scale these efforts and ensure sustainable impact.
As a cyber-psychologist, how do human factors shape technology use and security practices? What strategies would you suggest to cultivate a culture of security awareness and ethical technology use?
As a cyber-psychologist, I have seen firsthand how technology adoption and security behaviors are deeply influenced by human factors. People often make security decisions based on convenience, cognitive biases, and emotional triggers rather than rational assessment. Social engineering attacks exploit these vulnerabilities by leveraging fear, urgency, or trust to manipulate individuals into compromising security.
To cultivate a culture of security awareness and ethical technology use, organizations should:
- Replace fear-based awareness campaigns with a focus on practical, actionable security behaviors.
- Integrate behavioral science into security training, using gamification, habit formation, and real-world scenarios to reinforce learning.
- Encourage a security-conscious workplace culture, where reporting suspicious activity is normalized rather than feared.
- Promote digital mindfulness practices, helping individuals develop cognitive resilience against manipulative tactics used by cybercriminals. This also includes creating healthier digital habits that will improve not only cybersecurity but overall wellbeing.
KnowBe4’s holistic human risk management strategy goes beyond traditional security training to address the full spectrum of human risk factors. By leveraging security culture assessments, simulated phishing, behavioral analytics, and continuous reinforcement training, such as in-the-moment nudges and prompts, organizations can create a holistic defense against cyber threats targeting the human layer.
Ultimately, cybersecurity is a human challenge as much as a technical one. Addressing security through the lens of psychology, behavior, and education is key to building a digitally resilient society.