Agenda

BSidesTLV 2025 will take place IN PERSON on June 26th, 2025 at Smolarz Auditorium,
Tel Aviv University, from 08:30-19:00.

As always, amazing keynote speakers, pioneering content and fun surprises that you won't want to miss 😉

📅 Live Schedule

Here's the detailed agenda for BSidesTLV 2025!

Thursday, December 11, 2025

Smolarz

BSidesTLV 2025 Opening Words

BSidesTLV 2025 Opening Words by conference co-founder, Keren Elazari

Keren Elazari
09:00 - 09:10

Break

09:40 - 09:45

Captain MassJacker Sparrow: Uncovering the Malware's Buried Treasure

Nowadays, everyone knows the risks of downloading pirated software—just look at all the memes about Limewire destroying computers. Yet, people still download these programs, only to find their computers infected with malware. In this talk, we'll explore an exciting case of a previously unknown malware called MassJacker, found on a pirated software site. MassJacker is a heavily protected cryptojacking malware that uses a wide range of advanced anti-analysis techniques. As we go over the techniques, we’ll show how some of the code used to implement the techniques suggests a connection to another malware known as MassLogger. Once we’re done exploring the anti-analysis techniques used to protect MassJacker, we’ll look at the malware and the wallets it used. In addition, we’ll see how a flaw in how the malware uses AES encryption allowed us to recover crypto-wallets from previous campaigns totaling 778,531 unique addresses, with one worth over 300,000$!

Ari Novick
09:45 - 10:10

Break

10:10 - 10:15

Hey AI, how many "r" in "buffer overflow"?

AI can do certain tasks at a superhuman level of precision and speed, while utterly failing at other tasks which are trivial for humans, like counting "r" in "strawberry". At Pattern Labs, we have been working with frontier AI labs to test the offensive cyber capabilities of leading AI models. As part of our work, we have noticed many "strawberry-like" failures in vulnerability discovery & exploit development tasks, and we would like to share a few examples with you.

Yoni Rozenshein
10:15 - 10:25

Break

10:25 - 10:30

Agentic Exposure: Hijacking Web-Browsing AI Assistants

2025 is shaping up to be the year of the AI web agent - autonomous assistants powered by LLMs that browse the web, control applications, and carry out tasks with minimal human input. From experimental projects to production tools, these agents are now embedded in everything from productivity tools to enterprise workflows. But beneath the buzz lies a serious problem: security has not kept up. In this talk, we’ll dive into the emerging attack surface of AI web agents, exploring how they can be hijacked through indirect prompt injections, context leakage, insecure configurations, and more. Using real-world demos, we’ll show how a single compromised web page or clever string of text can redirect agents, exfiltrate data, or leak context from their original prompting, turning powerful automation into a security liability. We’ll examine key examples from tools like Browser-Use, showing where they go wrong and what attackers can exploit. We’ll also look briefly at the bigger picture: how agentic workflows and new inter-agent protocols (like MCP and A2A) create risks that traditional web defences aren’t prepared for. If you’re experimenting with AI agents, or planning to - this talk is your early warning. Learn how attackers are already probing these systems and how to protect yourself before your helpful agent becomes your biggest liability.

Sarit Yerushalmi
10:30 - 11:15

Break

11:15 - 11:30

Break

11:55 - 12:00

Find the Needle in the Haystack: Real-World Vulnerability Hunting Using LLM

Running CodeQL’s built-in queries on Redis gave me over 6,800 potential issues. Doable, maybe. But when I tried FFmpeg, I got over 51,000. That’s way too much for me. And how many of those are real vulnerabilities? Probably around 0.01%. The sheer number of false positives makes static code analysis impractical - who wants to manually sift through tens of thousands of results just to find a few actual security flaws? To fix this, we built AutoCQL, an open-source tool that fuses CodeQL with an LLM-driven agent. This agent autonomously navigates the code, running targeted queries to extract only the relevant context. On top of that, we introduced Guided Questioning, an advanced reasoning technique that keeps the LLM focused, improving accuracy even for complex vulnerabilities. Using this approach, we reduced false positive by up to 97% and discovered two real vulnerabilities: CVE-2025-27151 in Redis and CVE-2025-0518 in FFmpeg - within just a few hours of scanning. Join us, and let’s finally make static analysis work as it should.

Simcha K
12:00 - 12:25

Break

12:25 - 12:30

Nation-Level Threat Hunting - Strategies and real case studies

This lecture provides essential strategies, technical methodologies, and real case studies from the world of threat hunting.

Itamar Amar
12:30 - 12:55

Lunch

12:55 - 14:00

Don’t Panic! - CrowdStrike, the biggest PC cyber-attack that never was and its lessons

July 2024, a regular day, until all at once, around the world, 8.5 million Windows PCs crash, taking with them endless businesses and government organizations, disrupting, and even harming the lives of millions, and causing billions of dollars of damage. In this lecture I will take you into the technical details of what actually happened. I will also share the very valuable lessons, relevant for SW, Test and CyberSecurity engineers, architects and managers. Finally, I’ll shortly shine a light on the vital role Intel and its vPro (AMT) technology played in the recovery efforts.

Gal Urbach
14:00 - 14:25

Break

14:25 - 14:30

LLMs Suck at Cyber Intel—Unless You Hack Them Right

Every second, 6,000 posts flood X.com—some valuable, most just noise. Hidden in the chaos are leaked credentials, malware sightings, and attack indicators. The challenge? Spotting real threats before they get lost in the noise. This session dives into how AI can turn social media chaos into actionable cyber intel. We’ll track key cybersecurity insiders, extract Indicators of Compromise (IOCs), and put machine learning to the test. But raw AI isn’t enough—off-the-shelf models choke on misinformation and context. The real power comes from hacking LLMs to make them sharper, faster, and more threat-aware. Through real-world data and hands-on analysis, we’ll break down what works, what fails, and how security teams can tweak AI to stay ahead of attackers. No fluff—just hard-hitting lessons on AI’s role in threat intelligence. If you want to see AI in action, with all its strengths and weaknesses laid bare, buckle up. This talk is for you.

Inga Cherny
14:30 - 15:15

Break

15:15 - 15:20

Unmasking online commentators like a Noob

Various online forums allow users to post pseudonymously on a wide range of worldly matters, many tinged with deeply personal political or religious undertones. Although the posters may hide behind a veil of an obscure handle, the very elements of the discussions can provide deep insights about them including personal characterizations that could be used to identify the posters in the real world. All this is known by corporate (and government) intelligence analysis, who may use sophisticated tools and researchers to collect and collate this information. However today, thanks to widespread use of generative AI like ChatGPT, any noob can get it on the action. This talk will demonstrate this approach to reveal insights about various posters to popular, public blogs. In line with the premise of the talk, all complex code and analysis will be handled by AI: no human brain cells will be harmed ... or exercised.

Ari Trachtenberg
15:20 - 15:30

Agent Autonomy Exploited: Navigating the Security of Integrated AI Protocols

Consider an agentic AI assistant configured with a third-party Model Context Protocol (MCP) server for augmented functionalities, alongside its inherent database access capabilities. However, this external server harbors malicious intent. It compromises the credentials of each established connection and then subsequently provides tampered MCP tool descriptions containing hidden malicious instructions. These concealed instructions coerce the AI assistant into unknowingly transmitting sensitive data to the attacker. This multi-stage attack exploits trust in third-party integrations with agentic protocols and the autonomous nature of agents, transforming what was once considered fantasy into alarming reality. The cutting-edge technology employed by Agentic AI systems integrated with Agentic Protocols allows them to connect external tools and agents out-of-the-box. While this impressive flexibility unlocks new potential, it also gives rise to significant new and complex security threats that require careful consideration and proactive defense strategies. In this talk, we will provide a concise introduction to the threats inherent to key components of agents, like memory and planning modules following which, we will examine the impact of architectural decisions on security, specifically focusing on threats associated with prominent interaction mechanisms such as Anthropic's Model Context Protocol (MCP) and Google's Agent-to-Agent (A2A) protocol, which facilitate connections between models, tools, and autonomous agents. Finally, we will discuss how adhering to security best practices can help mitigate these threats.

Idan HablerItsik Mantin
15:30 - 15:55

Break

15:55 - 16:10

Quantum Leap to Breaking Cryptography

Cryptography is the backbone of modern digital security, safeguarding everything from personal communications to critical infrastructure. But the advent of quantum computing threatens to upend the foundations of today’s cryptographic systems. Algorithms once thought unbreakable are now tagged with expiration date. In this talk, we’ll explore the looming impact of quantum computers on cybersecurity, demystify the quantum threat, and examine how it’s already influencing security protocols. You’ll gain a clear understanding of what post-quantum cryptography is, why it matters now—not just in the future—and how to start preparing for the cryptographic shift ahead. Whether you're a security professional, developer, or curious hacker, this session will equip you with the insights to navigate the quantum challenge.

Erez Waisbard
16:10 - 16:35

Break

16:35 - 16:40

Break

17:25 - 17:30

CTF Winners Announcement

17:30 - 17:55

Closing words

17:55 - 18:00

📍 Venue

Smolarz Auditorium

Tel Aviv University

Up to 1200 attendees

⏰ Format

Full Day Event

08:30 - 19:00

In-Person Experience

What to Expect at BSidesTLV 2025

🎤

Keynote Speakers

Industry-leading experts sharing cutting-edge insights

🚀

Technical Sessions

30+ sessions covering the latest in cybersecurity

🏆

CTF Competition

20+ challenges to test your skills

Stay Updated

Be the first to know when we announce the new date and updated agenda!