BSidesTLV 2025 will take place IN PERSON on June 26th, 2025 at Smolarz Auditorium,
Tel Aviv University, from 08:30-19:00.
As always, amazing keynote speakers, pioneering content and fun surprises that you won't want to miss đ
Here's the detailed agenda for BSidesTLV 2025!
BSidesTLV 2025 Opening Words by conference co-founder, Keren Elazari
Nowadays, everyone knows the risks of downloading pirated softwareâjust look at all the memes about Limewire destroying computers. Yet, people still download these programs, only to find their computers infected with malware. In this talk, we'll explore an exciting case of a previously unknown malware called MassJacker, found on a pirated software site. MassJacker is a heavily protected cryptojacking malware that uses a wide range of advanced anti-analysis techniques. As we go over the techniques, weâll show how some of the code used to implement the techniques suggests a connection to another malware known as MassLogger. Once weâre done exploring the anti-analysis techniques used to protect MassJacker, weâll look at the malware and the wallets it used. In addition, weâll see how a flaw in how the malware uses AES encryption allowed us to recover crypto-wallets from previous campaigns totaling 778,531 unique addresses, with one worth over 300,000$!
AI can do certain tasks at a superhuman level of precision and speed, while utterly failing at other tasks which are trivial for humans, like counting "r" in "strawberry". At Pattern Labs, we have been working with frontier AI labs to test the offensive cyber capabilities of leading AI models. As part of our work, we have noticed many "strawberry-like" failures in vulnerability discovery & exploit development tasks, and we would like to share a few examples with you.
2025 is shaping up to be the year of the AI web agent - autonomous assistants powered by LLMs that browse the web, control applications, and carry out tasks with minimal human input. From experimental projects to production tools, these agents are now embedded in everything from productivity tools to enterprise workflows. But beneath the buzz lies a serious problem: security has not kept up. In this talk, weâll dive into the emerging attack surface of AI web agents, exploring how they can be hijacked through indirect prompt injections, context leakage, insecure configurations, and more. Using real-world demos, weâll show how a single compromised web page or clever string of text can redirect agents, exfiltrate data, or leak context from their original prompting, turning powerful automation into a security liability. Weâll examine key examples from tools like Browser-Use, showing where they go wrong and what attackers can exploit. Weâll also look briefly at the bigger picture: how agentic workflows and new inter-agent protocols (like MCP and A2A) create risks that traditional web defences arenât prepared for. If youâre experimenting with AI agents, or planning to - this talk is your early warning. Learn how attackers are already probing these systems and how to protect yourself before your helpful agent becomes your biggest liability.
Running CodeQLâs built-in queries on Redis gave me over 6,800 potential issues. Doable, maybe. But when I tried FFmpeg, I got over 51,000. Thatâs way too much for me. And how many of those are real vulnerabilities? Probably around 0.01%. The sheer number of false positives makes static code analysis impractical - who wants to manually sift through tens of thousands of results just to find a few actual security flaws? To fix this, we built AutoCQL, an open-source tool that fuses CodeQL with an LLM-driven agent. This agent autonomously navigates the code, running targeted queries to extract only the relevant context. On top of that, we introduced Guided Questioning, an advanced reasoning technique that keeps the LLM focused, improving accuracy even for complex vulnerabilities. Using this approach, we reduced false positive by up to 97% and discovered two real vulnerabilities: CVE-2025-27151 in Redis and CVE-2025-0518 in FFmpeg - within just a few hours of scanning. Join us, and letâs finally make static analysis work as it should.
This lecture provides essential strategies, technical methodologies, and real case studies from the world of threat hunting.
July 2024, a regular day, until all at once, around the world, 8.5 million Windows PCs crash, taking with them endless businesses and government organizations, disrupting, and even harming the lives of millions, and causing billions of dollars of damage. In this lecture I will take you into the technical details of what actually happened. I will also share the very valuable lessons, relevant for SW, Test and CyberSecurity engineers, architects and managers. Finally, Iâll shortly shine a light on the vital role Intel and its vPro (AMT) technology played in the recovery efforts.
Every second, 6,000 posts flood X.comâsome valuable, most just noise. Hidden in the chaos are leaked credentials, malware sightings, and attack indicators. The challenge? Spotting real threats before they get lost in the noise. This session dives into how AI can turn social media chaos into actionable cyber intel. Weâll track key cybersecurity insiders, extract Indicators of Compromise (IOCs), and put machine learning to the test. But raw AI isnât enoughâoff-the-shelf models choke on misinformation and context. The real power comes from hacking LLMs to make them sharper, faster, and more threat-aware. Through real-world data and hands-on analysis, weâll break down what works, what fails, and how security teams can tweak AI to stay ahead of attackers. No fluffâjust hard-hitting lessons on AIâs role in threat intelligence. If you want to see AI in action, with all its strengths and weaknesses laid bare, buckle up. This talk is for you.
Various online forums allow users to post pseudonymously on a wide range of worldly matters, many tinged with deeply personal political or religious undertones. Although the posters may hide behind a veil of an obscure handle, the very elements of the discussions can provide deep insights about them including personal characterizations that could be used to identify the posters in the real world. All this is known by corporate (and government) intelligence analysis, who may use sophisticated tools and researchers to collect and collate this information. However today, thanks to widespread use of generative AI like ChatGPT, any noob can get it on the action. This talk will demonstrate this approach to reveal insights about various posters to popular, public blogs. In line with the premise of the talk, all complex code and analysis will be handled by AI: no human brain cells will be harmed ... or exercised.
Consider an agentic AI assistant configured with a third-party Model Context Protocol (MCP) server for augmented functionalities, alongside its inherent database access capabilities. However, this external server harbors malicious intent. It compromises the credentials of each established connection and then subsequently provides tampered MCP tool descriptions containing hidden malicious instructions. These concealed instructions coerce the AI assistant into unknowingly transmitting sensitive data to the attacker. This multi-stage attack exploits trust in third-party integrations with agentic protocols and the autonomous nature of agents, transforming what was once considered fantasy into alarming reality. The cutting-edge technology employed by Agentic AI systems integrated with Agentic Protocols allows them to connect external tools and agents out-of-the-box. While this impressive flexibility unlocks new potential, it also gives rise to significant new and complex security threats that require careful consideration and proactive defense strategies. In this talk, we will provide a concise introduction to the threats inherent to key components of agents, like memory and planning modules following which, we will examine the impact of architectural decisions on security, specifically focusing on threats associated with prominent interaction mechanisms such as Anthropic's Model Context Protocol (MCP) and Google's Agent-to-Agent (A2A) protocol, which facilitate connections between models, tools, and autonomous agents. Finally, we will discuss how adhering to security best practices can help mitigate these threats.
Cryptography is the backbone of modern digital security, safeguarding everything from personal communications to critical infrastructure. But the advent of quantum computing threatens to upend the foundations of todayâs cryptographic systems. Algorithms once thought unbreakable are now tagged with expiration date. In this talk, weâll explore the looming impact of quantum computers on cybersecurity, demystify the quantum threat, and examine how itâs already influencing security protocols. Youâll gain a clear understanding of what post-quantum cryptography is, why it matters nowânot just in the futureâand how to start preparing for the cryptographic shift ahead. Whether you're a security professional, developer, or curious hacker, this session will equip you with the insights to navigate the quantum challenge.
Smolarz Auditorium
Tel Aviv University
Up to 1200 attendees
Full Day Event
08:30 - 19:00
In-Person Experience
Industry-leading experts sharing cutting-edge insights
30+ sessions covering the latest in cybersecurity
20+ challenges to test your skills
Be the first to know when we announce the new date and updated agenda!