Cybersecurity professionals in the United States are logging an average of 10.8 extra hours each week beyond their contracted schedules, according to a Sapio Research survey of 300 cybersecurity and IT leaders. That overtime effectively adds a full working day to the typical week. Alarmingly, nearly half of those surveyed reported working at least 11 hours of overtime weekly, and one in five said they clocked more than 16 additional hours.
The psychological toll of this workload is evident. Almost half of the respondents said their jobs feel more emotionally exhausting than rewarding, a sentiment strongest among C-level executives. Many reported being unable to take time off without returning to a stressful backlog, and about one-third said that every week they feel anticipatory anxiety about the work week ahead.
Despite the mounting pressure, a remarkable 94% of respondents stated they would choose a career in cybersecurity again, with most indicating they would do so without hesitation.
The Skills Profile Is Shifting
Over 80% of cybersecurity leaders reported that interpersonal skills, such as communication, influence, and stakeholder management, have become more critical to their effectiveness than they were five years ago. The rise of AI tools is accelerating this shift, with the majority of leaders acknowledging increased pressure to enhance their communication and business skills as a direct consequence.
Recognition of this shift varies with organization size: leaders at smaller enterprises noted it more frequently than their counterparts at larger companies. Most respondents now find that their roles require considerable cross-functional collaboration and alignment with broader business strategies.
Governance Outranks Engineering
When asked about the capabilities that will shape the cybersecurity professional of the future, 73% of leaders identified AI oversight and governance as the top priority. This was followed by technological and engineering proficiency, with cross-functional communication, business strategy, and leadership being cited by about half of the respondents.
This data suggests a profession moving away from manual technical execution. Cybersecurity practitioners are increasingly expected to manage automated systems, audit AI outputs, and align security decisions with organizational goals.
Many companies are adding AI governance responsibilities to security leaders' roles without adjusting their job structures. Ravid Circus, CPO at Seemplicity, emphasized that layering on AI oversight responsibilities without a reorganization of teams only accelerates burnout. He advocates for a redesign of organizational structures to effectively manage these new responsibilities.
Circus also noted that dedicated AI governance roles must be integrated into security teams with clear accountability. This includes formal ownership of AI outputs, defined escalation paths for when automation fails, and frameworks specifying when human intervention is necessary. He urged organizations to view AI adoption as a leadership transformation, stressing the need for clarity on accountability when issues arise.
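As a concrete illustration, an escalation framework of the kind Circus describes could be written down as explicit policy rather than left implicit. The sketch below is a minimal, hypothetical example: the `EscalationPolicy` schema, severity tiers, and role names are assumptions for illustration, not structures from the survey or from Seemplicity.

```python
from dataclasses import dataclass
from enum import Enum, auto


class FailureSeverity(Enum):
    """Illustrative severity buckets for automation failures."""
    LOW = auto()      # e.g. a single misclassified alert
    MEDIUM = auto()   # e.g. repeated false positives from one detection
    HIGH = auto()     # e.g. an automated action that disrupted production


@dataclass(frozen=True)
class EscalationPolicy:
    """Ties an AI output to a named owner and a defined escalation path.

    Hypothetical schema: the survey calls for formal ownership and
    escalation paths, not this particular structure.
    """
    output_owner: str                            # role accountable for the output
    escalation_path: dict[FailureSeverity, str]  # severity -> who gets paged

    def escalate(self, severity: FailureSeverity) -> str:
        """Return the role to notify when automation fails at this severity."""
        return self.escalation_path[severity]


# Example: ownership and escalation for an AI-driven alert triage pipeline.
triage_policy = EscalationPolicy(
    output_owner="SOC lead",
    escalation_path={
        FailureSeverity.LOW: "on-call analyst",
        FailureSeverity.MEDIUM: "detection engineering manager",
        FailureSeverity.HIGH: "CISO",
    },
)

print(triage_policy.escalate(FailureSeverity.HIGH))  # -> CISO
```

The value is less the code itself than that accountability is recorded once, in an auditable place, instead of being renegotiated mid-incident.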
Budget Exists, Training Does Not
Close to two-thirds of respondents indicated that their organizations have adequate budgets to implement AI features, with smaller organizations expressing the most confidence. However, over half of respondents across various organization sizes described the training available for human-AI collaboration as either limited or inadequate.
Circus pointed out that budget constraints are not the issue. Organizations are investing in tools but neglecting the crucial next step of role-specific training. This training should address practical questions security leaders face daily, such as how to validate AI system reports, when to override them, and how to explain AI-driven decisions to boards or regulators.
Furthermore, Circus highlighted the lack of structured frameworks for human-in-the-loop workflows. Most teams are improvising accountability in real time, lacking clear guidelines for when AI operates autonomously, when it makes recommendations for human decision-making, and when humans take full control. This ambiguity contributes significantly to decision fatigue and operational friction.
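One way to remove that ambiguity is to encode the three operating modes as an explicit policy. The minimal sketch below assumes hypothetical risk and confidence scores between 0 and 1; the thresholds are placeholders for illustration, not figures from the report.

```python
from enum import Enum


class AutonomyTier(Enum):
    """The three modes respondents said they lack clear guidelines for."""
    AUTONOMOUS = "AI acts without review"
    RECOMMEND = "AI recommends, a human decides"
    HUMAN_LED = "a human takes full control"


def autonomy_tier(action_risk: float, model_confidence: float) -> AutonomyTier:
    """Choose an operating mode from risk and confidence scores (0 to 1).

    The cutoffs here are assumptions; the point is that they are written
    down once, not improvised in real time per incident.
    """
    if action_risk < 0.3 and model_confidence > 0.9:
        return AutonomyTier.AUTONOMOUS  # low stakes, high confidence
    if action_risk < 0.7:
        return AutonomyTier.RECOMMEND   # a human approves before action
    return AutonomyTier.HUMAN_LED       # high stakes: the analyst drives


print(autonomy_tier(action_risk=0.2, model_confidence=0.95).name)  # AUTONOMOUS
```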
Trust Requires Transparency and Control
Cybersecurity leaders prioritize consistent and measurable accuracy over time as the primary criterion for trusting AI systems. Clear accountability, human override controls, and transparency in decision-making processes rank closely behind.
Leaders exhibit significantly higher trust in their internal teams than in third-party vendors, a gap attributed to visibility and oversight: 87% expressed a high level of trust in their internal teams to use AI responsibly, compared with 77% for cybersecurity vendors.
Circus emphasized the need for vendors to address this trust gap by incorporating explainability into their products. This includes providing audit trails, meaningful human override controls, and honest communication about potential model failures. He stated that when security leaders feel accountable for outcomes derived from opaque systems, it undermines trust and complicates governance.
To build trust, every AI-driven output should be traceable to its source, ensuring accountability for errors and providing mechanisms to catch mistakes before they escalate into significant issues. When vendors can effectively address these concerns, trust will naturally follow.
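As an illustration of what traceability can mean in practice, each AI-driven output could be logged as a record that ties the decision to the model version and the evidence behind it, with a content hash so later edits are detectable. The schema below is a hypothetical sketch, not any vendor's actual API.

```python
import hashlib
import json
from dataclasses import asdict, dataclass, field
from datetime import datetime, timezone


@dataclass
class AuditRecord:
    """One traceable AI-driven output. Field names are illustrative
    assumptions; the article calls for traceability, not this schema."""
    model_id: str                      # which model/version produced the output
    input_sources: list[str]           # evidence the decision was based on
    output: str                        # what the system concluded or did
    overridden_by: str | None = None   # analyst who overrode it, if anyone
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def fingerprint(self) -> str:
        """Content hash of the record, so tampering can be detected later."""
        payload = json.dumps(asdict(self), sort_keys=True)
        return hashlib.sha256(payload.encode()).hexdigest()


record = AuditRecord(
    model_id="triage-model-v3",
    input_sources=["alert:4821", "asset-inventory:web-01"],
    output="suppressed alert as benign",
)
print(record.fingerprint()[:12])  # short fingerprint for log correlation
```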