Executive Summary

Early AI security solutions frequently overwhelmed rather than empowered security teams, saddling them with long deployment cycles, complex integrations, and uncertain ROI. Drawing on extensive frontline experience, Tyler Lalicker outlines a pragmatic path forward: using proactive engineering practices to shift complexity away from security teams, enabling faster deployments, clearer outcomes, and uniquely differentiated security services.

Lalicker identifies the industry's tired mantra, "do more with less," as a symptom of technology stacks that impose excessive operational burdens. Instead, AI-driven products should explicitly focus on unburdening human operators, enabling them to fully leverage their expertise and drive meaningful advances in security operations.


The Painful Past: Clarity > Complexity

As a security engineer and product builder, I've seen many sophisticated AI tools collapse under their own complexity. Early deep learning models promised transformative insights but demanded massive investments in labeled data while producing results analysts couldn't easily trust or explain. Ironically, less flashy statistical and machine learning approaches often delivered stronger performance because they required fewer assumptions and adapted better to the messy realities of security operations. Yet even these simpler solutions often turned into resource-intensive deployments, demanding extensive data engineering and custom integrations.

According to Gartner, the average security product deployment takes at least 6 months to deliver initial value—and often more than a year to reach true operational maturity. This lengthy timeline increases costs, frustrates security teams, and delays return on investment, making complexity a fundamental obstacle rather than a benefit.

Lesson Learned: Successful AI products place complexity on the product creators, not the customers; clarity and simplicity accelerate security outcomes and maximize ROI.

<aside> 💡

“When AI solutions demand more from your teams than they give back, it isn't innovation—it's a liability. True innovation must offer simplicity, clarity, and acceleration.”

</aside>

Present-Day Breakthroughs: LLMs as the Bridge to Simplicity

As an early builder of commercial security products on LLMs (developing Psycholinguistics back in 2021), I experienced firsthand how AI language models could translate human expertise directly into customizable, automated security workflows. Instead of analysts adapting to the limitations of rigid AI systems, the new models let security professionals guide AI behavior naturally through simple text instructions.
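
To make that concrete, here is a minimal sketch of what guiding AI behavior through plain text instructions can look like: an analyst's written triage guidance is dropped into a prompt and applied to each incoming alert. The `call_llm` stub, the guidance text, and the alert fields are hypothetical placeholders for illustration, not a description of any particular product.

```python
# Minimal sketch: an analyst's plain-text guidance steers automated alert triage.
# "call_llm" is a stand-in for whatever model API a team actually uses.
import json

ANALYST_GUIDANCE = """\
Treat failed logins on service accounts outside business hours as high priority.
Ignore traffic from the approved internal vulnerability scanners.
When in doubt, escalate rather than suppress.
"""

def call_llm(prompt: str) -> str:
    # Placeholder stub so the sketch runs as-is; a real deployment would call
    # an LLM client here and return its text response.
    return json.dumps({"priority": "high", "rationale": "stubbed response"})

def triage_alert(alert: dict) -> dict:
    """Triage one alert according to the analyst's written guidance."""
    prompt = (
        "You are assisting a security operations analyst.\n"
        "Follow this guidance exactly:\n"
        f"{ANALYST_GUIDANCE}\n"
        "Alert (JSON):\n"
        f"{json.dumps(alert)}\n\n"
        'Reply with JSON only: {"priority": "high|medium|low", "rationale": "<one sentence>"}'
    )
    return json.loads(call_llm(prompt))

if __name__ == "__main__":
    alert = {"user": "svc-backup", "event": "failed_login", "time": "02:13", "source_ip": "10.0.4.7"}
    print(triage_alert(alert))
```

The specific schema is beside the point; what matters is that the triage policy lives in the analyst's own words rather than in retrained models or brittle rule syntax.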

Woo An (CEO, ex-Palantir) and I realized that combining ontology-driven AI with Forward-Deployed Security Engineering (FDSE), in which engineers embed directly with security teams to proactively manage integration, could finally bridge the challenging deployment gap in a scalable way. The Forward-Deployed approach ensures that the complexity of integration and customization is owned by the product creators, freeing security teams to focus solely on their core mission.

<aside> 💡

“The big breakthrough in AI is the power to seamlessly understand and speak the language of human expertise, eliminating barriers on the road to operational excellence.”

</aside>

The Future: More Degrees of Freedom for SecOps

For too long, security teams have struggled under the mandate "do more with less." It's past time for security product creators to own and resolve their contributions to this problem. Rather than serving as a shortcut that replaces human expertise at critical decision points, AI should be an empowering tool that reduces complexity and expands operational freedom.

Security product companies can help teams genuinely do more by delivering lower-friction products that use AI to: