
One of the top challenges for threat intelligence teams is having too many data feeds, according to the July 2025 Threat Intelligence Benchmark report from Forrester Consulting and Google Cloud.
Forrester Consulting surveyed more than 1,500 IT and cybersecurity leaders across 12 industries in countries including the UK, Australia, and Japan.
The biggest challenges to improving threat intelligence
According to the report, the most significant data and analytics hurdles to improving threat intelligence capabilities were:
- Too many threat intelligence data feeds (61%).
- Lack of skilled threat analysts (60%).
- Difficulty making the data actionable (59%).
- Difficulty verifying the validity and/or relevance of threats (59%).
- Difficulty determining which intelligence applies (49%).
In total, 82% of respondents reported being concerned or very concerned about missing threats due to the overwhelming volume of alerts and data.
Sixty-one percent of respondents cited the abundance of feeds as a challenge, while 60% pointed to a shortage of skilled analysts.
This volume of data also hinders collaboration. According to the report, 66% of respondents said they had difficulty sharing threat intelligence with the right teams.
AI helps summarize information for threat intelligence teams
AI tools are increasingly being used to manage the volume of information facing security teams. According to the report, 69% of respondents said that generating summaries was the most useful application of generative AI for threat intelligence. Other cited benefits included:
- Improving the ability to prioritize threats and vulnerabilities (68%).
- Making threat intelligence more accessible to stakeholders (68%).
- Providing actionable recommendations to help junior analysts (63%).
- Freeing up time for high-priority tasks (60%).
- Improving decision-making (50%).
“Interestingly, we didn’t see a clear leading use of AI but rather a grouping of benefits across summarization, prioritization, and communication,” said Jayce Nichols, director of the Google Threat Intelligence Group at Google Cloud, in an email to TechRepublic. “Clearly, organizations are still figuring out the best ways for their teams to use AI.”
However, AI can also introduce errors. Google Cloud recommends incorporating AI into workflows in a secure and monitorable way.
“Advancements in the quality of AI responses improve every day, but humans should still do their due diligence and double-check the key components of the outputs before taking action,” said Nichols. “This mirrors the response we saw in the study, where 81% of respondents say they trust the use of AI in threat intelligence from reputable vendors only.”
Prioritize high-stakes assets and know your adversaries
Based on the report’s findings, Google Cloud advises organizations to first identify the high-stakes needs of their business in order to determine where to allocate threat intelligence resources most effectively. Understanding which adversaries are most likely to target a given organization or sector is another key strategy.
Google Cloud also recommends improving communication among threat intelligence teams, incident response (IR) teams, and the security operations center (SOC) to increase the usefulness of shared threat intelligence.
SOC and IR analysts should prioritize tasks that enable them to work more efficiently, such as proactive threat hunting, contextualizing alerts, developing custom detections, and supporting incident response efforts.
Meanwhile, threat intelligence teams are encouraged to track performance using key metrics such as mean time to respond, alert fidelity, blocked threats, and the number of threats identified through intelligence-led hunting.