Innovation thrives when people can collaborate in a trusted manner, leveraging data creatively and freely through technology. Trusted interactions create value for a company, but the intersection between end users and data is also an enterprise's point of greatest vulnerability, and the primary source of the breaches driving cyber risk to all-time highs. How can security professionals know whether an end-user login reflects an employee connecting over coffee-shop WiFi or an attacker abusing authorized credentials? How do they know whether a user identity is behaving consistently with its established routine on the network, or erratically?
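In practice, answering that question means comparing each new event against a per-user baseline. The following sketch is purely illustrative and not drawn from this article: the sample login hours, the helper name, and the three-standard-deviation threshold are all assumptions, meant only to show what "comparing to an established routine" can look like in its simplest form.

```python
# Illustrative sketch (hypothetical data and threshold): flag a login whose
# hour deviates sharply from a user's historical login pattern.
from statistics import mean, stdev

# Hypothetical historical login hours (24-hour clock) for one user.
baseline_hours = [8, 9, 9, 8, 10, 9, 8, 9, 10, 8]

def is_anomalous(login_hour: float, history: list, threshold: float = 3.0) -> bool:
    """Return True if the login hour sits more than `threshold` standard
    deviations away from the user's historical mean."""
    mu = mean(history)
    sigma = stdev(history)
    if sigma == 0:
        return login_hour != mu
    return abs(login_hour - mu) / sigma > threshold

# A 9 a.m. login matches the routine; a 3 a.m. login stands out.
print(is_anomalous(9, baseline_hours))   # False
print(is_anomalous(3, baseline_hours))   # True
```

Real user and entity behavior analytics systems model far richer signals (device, location, data volumes, peer-group behavior), but the underlying idea of scoring deviation from a learned baseline is the same.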
Knowing, and acting on, the difference between an individual legitimately trying to get their job done and a compromised identity is the difference between innovation and intellectual property (IP) loss, between an organization's success and its failure. As data and digital experiences are placed into the hands of others, trust becomes even more crucial. Businesses rise or fall on trust: companies that abuse their customers' trust face millions or even billions of dollars in regulatory fines and lost market value, as the Facebook and Cambridge Analytica case showed.
In addition to the myriad constantly evolving threats in today's landscape, organizations are hampered by an ongoing skills shortage: analysts predict 3.5 million unfilled cybersecurity positions by 2021. In an attempt to fill the void, organizations have turned to the promise of big data, artificial intelligence (AI), and machine learning.
The buzz around cybersecurity AI is palpable. Over the past two years, the promise of machine learning and AI has enthralled marketers and the media alike, with many falling victim to feature misconceptions and muddy product differentiation. In some cases, AI start-ups conceal just how much human intervention is involved in their product offerings. In others, the incentive to offer machine learning-based products is simply too compelling to ignore, if for no other reason than to check a box for an intrigued customer base.
Aside from the technology itself, investment is another troublesome area for cybersecurity AI. Venture capitalists seeding AI firms expect a timely return on investment, but the AI bubble has many experts worried. Michael Wooldridge, head of Computer Science at the University of Oxford, has expressed concern that "charlatans and snake-oil salesmen" are exaggerating AI's progress to date. Researchers at Stanford University launched the AI Index, an open, not-for-profit project meant to track activity in AI; in their 2017 report, they state that even AI experts have a hard time understanding and tracking progress across the field.
To help organizations navigate this landscape, we present to you the "Top 10 Forcepoint Consulting/Services Companies - 2019."