
Deception Detection Hackathon: Preventing AI deception

June 28, 2024, 4:00 PM to July 1, 2024, 3:00 AM (UTC)
This event is finished. It occurred between June 28, 2024 and July 1, 2024.

Sign up for the Deception Detection Hackathon

Are you fascinated by the incredible advancements in AI? As AI becomes smarter and more powerful, it's essential that we make sure it always tells the truth and doesn't trick people. Imagine if an AI could manipulate narratives or try to scheme—that could lead to serious problems!

That's where you come in. We're inviting you to join the Deception Detection Hackathon, an exciting event where you'll team up with researchers, programmers, and AI safety experts to create amazing pilot experiments that spot when an AI is being deceptive (and potentially reduce deceptive tendencies!). Over one thrilling weekend, you'll put your skills to the test and develop cutting-edge techniques to keep AI honest and trustworthy.

Why deception detection matters

Deception in AI, a severely under-explored concept, occurs when an AI system is capable of deceiving a user, whether by the design of a malicious actor or due to misaligned goals. Such systems may appear to be aligned with users and human values during training and evaluation but pursue malign objectives when deployed, potentially causing harm or undermining trust in AI.

Examples of such work can be found in the resources section further down this page.

To mitigate these risks, we must develop robust deception detection methods that can identify instances of strategic deception, make headway on understanding AI capabilities for deception, and prevent AI systems from misleading humans. By participating in this hackathon, you'll contribute to the critical task of ensuring that AI remains transparent, accountable, and aligned with human values.

Contribute to AGI deception research

During the hackathon, you'll have the opportunity to:

  • Learn from experts in AI safety, deceptive alignment, and strategic deception
  • Collaborate with a diverse group of participants to ideate and develop deception detection techniques
  • Create benchmarks and evaluation methods to assess the effectiveness of deception detection approaches
  • Compete for prizes and recognition for the most innovative and impactful solutions
  • Network with like-minded individuals passionate about ensuring the safety and trustworthiness of AI

Whether you're an AI researcher, developer, or enthusiast, this hackathon provides a unique platform to apply your skills and knowledge to address one of the most pressing challenges in AI safety.

Join us in late June for a weekend of collaboration, innovation, and problem-solving as we work together to prevent AI from deceiving humans. Stay tuned for more details on the exact dates, format, and registration process.

Don't miss this opportunity to contribute to the development of trustworthy AI systems and help shape a future where AI and humans can work together safely and transparently. Let's hack for a deception-free AI future!

Prizes, evaluation, and submission

You will join in teams to submit a PDF about your research according to the submission template shared in the submission tab! Depending on the judges' reviews, you'll have the chance to win from the $2,000 prize pool! Find the review criteria on the submission tab.

  • 🥇 $1,000 for the top team
  • 🥈 $600 for the second prize
  • 🥉 $300 for the third prize
  • 🏅 $100 for the fourth prize

What is a research hackathon?

The AGI Deception Detection Hackathon is a weekend-long event where you participate in teams of 1-5 to create interesting, fun, and impactful research. You submit a PDF report that summarizes and discusses your findings in the context of AI safety. These reports will be judged by our panel, and you can win up to $1,000!

It runs from 28th June to 1st July and we're excited to welcome you for a weekend of engaging research. You will hear fascinating talks about real-world projects tackling these types of questions, get the opportunity to discuss your ideas with experienced mentors, and you will get reviews from top-tier researchers in the field of AI safety to further your exploration.

Everyone can participate, and we especially encourage you to join if you're considering a move into AI safety from another career. We give you code templates and ideas to kickstart your projects, and you'll be surprised what you can accomplish in just a weekend – especially with your new-found community!

Read more about what you can expect, the schedule, and what previous participants have said about being part of the hackathon below.

Why should I join?

There are loads of reasons to join! Here are just a few:

  • See how fun and interesting AI safety can be
  • Get to know new people who are into the overlap of empirical ML safety and AI governance
  • Win up to $1,000, helping you towards your first H100 GPU
  • Get practical experience with LLM evaluations and AI safety research
  • Show the AI safety labs what you can do and increase your chances at some amazing jobs
  • Get a certificate at the end!
  • Get proof that your work is awesome so you can get that grant to pursue the AI safety research you've always wanted to do
  • The best teams are invited to join the Apart Lab program, which supports teams in their journey towards publishing groundbreaking AI safety and security research
  • And many many more… Come along!

Do I need experience in AI safety to join?

Please join! This can be your first foray into AI and ML safety, and maybe you'll realize that there is exciting low-hanging fruit well suited to your specific skillset. Even if you normally don't find it particularly interesting, this time you might see it in a new light!

There's a lot of pressure in AI safety to perform at a top level, and this seems to drive some people out of the field. We'd love it if you consider joining with a mindset of fun exploration and get a positive experience out of the weekend.

What are previous experiences from the research hackathon?

Yoann Poupart, BlockLoads CTO: "This Hackathon was a perfect blend of learning, testing, and collaboration on cutting-edge AI Safety research. I really feel that I gained practical knowledge that cannot be learned only by reading articles.”

Lucie Philippon, France Pacific Territories Economic Committee: "It was great meeting such cool people to work with over the weekend! I did not know any of the other people in my group at first, and now I'm looking forward to working with them again on research projects! The organizers were also super helpful and contributed a lot to the success of our project.”

Akash Kundu, now an Apart Lab fellow: "It was an amazing experience working with people I didn't even know before the hackathon. All three of my teammates were extremely spread out, while I am from India, my teammates were from New York and Taiwan. It was amazing how we pulled this off in 48 hours in spite of the time difference. Moreover, the mentors were extremely encouraging and supportive which helped us gain clarity whenever we got stuck and helped us create an interesting project in the end.”

Nora Petrova, ML Engineer at Prolific: “The hackathon really helped me to be embedded in a community where everyone was working on the same topic. There was a lot of curiosity and interest in the community. Getting feedback from others was interesting as well and I could see how other researchers perceived my project. It was also really interesting to see all the other projects and it was positive to see other's work on it.”

Chris Mathwin, MATS Scholar: "The Interpretability Hackathon exceeded my expectations, it was incredibly well organized with an intelligently curated list of very helpful resources. I had a lot of fun participating and genuinely feel I was able to learn significantly more than I would have, had I spent my time elsewhere. I highly recommend these events to anyone who is interested in this sort of work!”

What if my research seems too risky to share?

While we emphasize the inclusion of concrete mitigation ideas for the risks presented, we are aware that projects emerging from this hackathon might pose a risk if disseminated irresponsibly.

For all of Apart's research events and dissemination, we follow our Responsible Disclosure Policy.

Speakers & Collaborators

Marius Hobbhahn

Marius is the CEO of Apollo Research, a research non-profit that specializes in creating evaluations for deception.
Co-organizer

Archana Vaidheeswaran

Archana is responsible for organizing the Apart Sprints, research hackathons to solve the most important questions in AI safety.
Organizer

Esben Kran

Esben is the co-director of Apart Research and specializes in organizing research teams on pivotal AI security questions.
Organizer

Jason Schreiber

Jason is co-director of Apart Research and leads Apart Lab, our remote-first AI safety research fellowship.
Organizer

Rudolf Laine

Author of the situational awareness benchmark and an independent AI safety researcher working with Owain Evans.
Reviewer

Jacob Haimes

Author of the unpublished retro-holdout paper about evaluation datasets that have leaked into the training set, and an Apart Lab fellow. He hosts a podcast on AI safety.
HackTalk speaker

Kunvar Thaman

Research engineer in mechanistic interpretability and former cybersecurity engineer. Author of the "Benchmark Inflation" paper quantifying LLM performance gaps. Apart Research Fellow.
Reviewer

Henry Sleight

Henry is a scholar support specialist at the MATS program and has supported Ethan Perez's research assistants in AI safety research. Board member of LISA.
Reviewer

Natalia Pérez-Campanero Antolín

Soon to be a research manager at Apart, Natalia has a PhD in Interdisciplinary Biosciences from Oxford and has run the Royal Society's Entrepreneur-in-Residence program.
Judge

Mikita Balesni

Mikita is a research scientist at Apollo Research and works on evaluations of dangerous language model capabilities. He has helped Ukraine's cyber front against the Russian invasion.
HackTalk speaker

David Matolcsi

David is a theoretical AI safety researcher at the Alignment Research Center, where he works on Paul Christiano's agenda. His background is in mathematics.
Reviewer

We have collected a few exciting resources about deception, both to get you started with research in this area and to help you dive even deeper. Required reading:

Optional research articles on concepts related to deception detection:

  • Research attempting to detect deception in AI systems
    • The Situational Awareness Benchmark (Laine et al., 2024) takes steps towards understanding when language models know which situation they are in, a property that is important for executing strategic deception such as sandbagging
    • The GDM frontier evals paper (Phuong et al., 2024) includes a section on autonomy with good tasks for testing applied situational awareness
    • Burns et al. (2024) use a form of PCA on neural network activations to identify whether a language model is lying or not
    • Zou et al. (2023) use neural population modeling to develop "representation engineering", a method to understand a neural network's representation of high-level concepts, such as "honesty", "morality", and "emotions"
    • Azaria & Mitchell (2023) classify whether a statement is true or false based on activations in a language model, leading to an understanding of when a model might be lying
    • Pacchiardi et al. (2023) use a black-box method (where we don't have access to model internals) to ask unrelated follow-up questions and use the model's answers to understand whether it's lying or not
    • Kuhn et al. (2023) measure the uncertainty of a language model's responses
    • Anthropic (2024) uses "linear probes" (simple statistics on neural network activations) to catch sleeper agents (see the definition in Hubinger et al. further down); a minimal probe sketch in this spirit follows this list
  • Research introducing methods for reducing deception and trustworthy design
    • Quirke et al. (2024) introduce an example mitigation where we "install" verified circuits into neural networks to trust a specific sub-process in a network
    • Li et al. (2023) use activation steering to increase honesty and truthfulness in a Llama model after deployment (inference-time)
    • In Hubinger et al. (2024), Anthropic shows that so-called "sleeper agents" (LLMs trained to be deceptive) are very difficult to inspect and remove harmfulness from (with involvement from Apart)
  • Related research work exploring the frontier of capabilities that are potentially required for deception, or that may be useful for your work during the weekend
    • Kinniment et al. (2024) describe challenge-based methods for evaluating models' capacity for autonomy and autonomous research and development - you might find inspiration here for developing agent architectures to test for deception (and possibly find this METR research useful as well)
    • AISI published their fourth progress report with evaluation results on various models covering 1) the ability to program and conduct cyber operations, 2) knowledge of chemistry and biology, 3) the ability to work autonomously, and 4) security against malicious attacks
    • AISI also published Inspect, a framework for large language model (LLM) evaluations, which might be useful for your work
    • Woodside et al. (2023) published an updated list of examples where AI is used to improve AI systems, which is itself an example of a collaborative literature review project
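
To make the white-box detection work above more concrete, here is a minimal sketch of an activation-probe lie detector in the spirit of Azaria & Mitchell (2023) and Anthropic's linear probes: extract hidden-state activations for labeled true and false statements from a language model and fit a simple classifier on them. The model name, layer index, and toy dataset below are illustrative assumptions rather than part of any starter template; swap in the model and data you actually want to study.

```python
# Minimal, illustrative sketch of a linear probe for truthfulness.
# Assumptions (not from the hackathon materials): a Hugging Face causal LM,
# a placeholder layer index, and a tiny toy dataset of labeled statements.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

MODEL_NAME = "gpt2"  # placeholder; swap in the model you are actually studying
LAYER = 6            # placeholder layer to probe

statements = [  # toy labels; replace with a real true/false statement dataset
    ("The capital of France is Paris.", 1),
    ("The capital of France is Berlin.", 0),
    ("Water boils at 100 degrees Celsius at sea level.", 1),
    ("Water boils at 10 degrees Celsius at sea level.", 0),
]

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME, output_hidden_states=True)
model.eval()

def last_token_activation(text: str) -> torch.Tensor:
    """Return the hidden state of the final token at the chosen layer."""
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs)
    # outputs.hidden_states is a tuple of [1, seq_len, hidden_dim] tensors
    return outputs.hidden_states[LAYER][0, -1, :]

X = torch.stack([last_token_activation(s) for s, _ in statements]).numpy()
y = [label for _, label in statements]

# The split only becomes meaningful with a real dataset; here it shows the workflow.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.5, random_state=0, stratify=y
)
probe = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("probe accuracy on held-out statements:", probe.score(X_test, y_test))
```

Scaling this up mostly means using a much larger labeled dataset, sweeping over layers, and checking whether the probe generalizes to statements the model generated itself.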

Optional reading about the potential for deception in superintelligent systems:

  • Hubinger et al. (2021) explore the theoretical tendency of neural networks to develop internal goals that might not be aligned with what their creators intend
  • Carlsmith (2023) explores the possibility that machine learning training actually incentivizes deception and scheming
  • van der Weij et al. (2024) introduce the concept of "sandbagging" in AI systems: the potential for models to strategically underperform during evaluations to fool an auditor or engineer
  • Scheurer et al. (2024) is an Apollo Research project presented at the UK AI Safety Summit, showing a preliminary demonstration of an LLM strategically deceiving humans

Additionally, we have previously hosted hackathons where related work was submitted. You can find examples on the Sprints page for inspiration on where you might take your project during the weekend.

See the updated calendar and subscribe

The schedule runs from 4 PM UTC Friday to 3 AM UTC Monday. We start with an introductory talk and end the event during the following week with an awards ceremony. Join the public iCal here.

Before the hackathon begins, you will also find Explorer events, such as collaborative brainstorming and team match-making, on Discord and in the calendar.

📍 Registered jam sites

Besides remote and virtual participation, our amazing organizers also host local hackathon sites where you can meet up in person and connect with others in your area.

WhiteBox Research - Manila Node of AI Deception Hackathon

WhiteBox is hosting the Manila node of the hackathon at Openspace Katipunan, 50 Esteban Abada St., from June 29 to July 1. Join us!

EA Tech London @ London Initiative for Safe AI: Deception Detection Hackathon

We'll be hosting a jam site at the LISA offices in Shoreditch (25 Holywell Row, London EC2A 4XE). Hang out and collaborate with others interested in, and working on, AI safety!

AI Safety Initiative Groningen (aisig.org) - Deception Detection Hackathon

We will be hosting the hackathon at Hereplein 4, 9711GA, Groningen. Join us!

🏠 Register a location

The in-person events for the Apart Sprints are run by passionate individuals just like you! We organize the schedule, speakers, and starter templates, and you can focus on engaging your local research, student, and engineering community. Read more about organizing.

📣 Social media images and text snippets

  • Keynote Speaker Deception Detection Hackathon Social Media Square
  • Deception Detection Hackathon GIF Square
  • Deception Detection Hackathon Media 2
  • Deception Detection Hackathon Media 1

Use this template for your submission [Required]

For your submission, you are required to put together a PDF report. In the submission form below, you can see which fields are optional and which are not.

Review criteria

The judging criteria for your submission are:

  • Deception: Is your project inspired and motivated by existing literature on deception? Does it represent significant progress in detecting and/or mitigating deception?
  • AI Safety: Does your project seem like it will contribute meaningfully to the safety and security of future AI systems? Is the motivation for the research good and relevant for safety?
  • Generalizability / Reproducibility: Does your project seem like it would generalize? For example, do you show results on multiple models and investigate potential errors in your detection method? Is your code available in a repository or a Google Colab?

Email: Friday, June 28

👀 Join us for the keynote in an hour!

We're now just one hour away from the kickoff, where you'll hear an inspiring talk from Marius Hobbhahn in addition to Esben's introduction to the weekend's schedule, judging criteria, submission template, and more.

It will also be livestreamed so everyone across the world can watch the keynote during the weekend. After the hackathon, the recording of Marius’ talk will be published on our YouTube channel so you can revisit it.

We've had very inspiring sessions during this past week and we're excited to welcome everyone else to get exciting projects set up! Even before we begin, several interesting ideas have emerged on the #projects | teams forum in our community server, where you can read others' projects and team up.

We look forward to seeing you in an hour! Here are the details:

  • Time: The event happens at 16:00 UTC (18:00 CEST, 9:00 PDT, 23:00 ICT), and we'll have a short buffer at the start to make sure everyone gets in to follow along
  • 🙋 Q&A: After Marius’ talk you'll have the chance to raise your virtual hand and ask any questions before Esben gives his introduction to the weekend
  • 🔗 Location: Talks & Keynotes voice channel
  • 👋 Community server: Join the Discord server at this link if you haven't already
  • 📆 Schedule: The schedule is on the hackathon website but you can also subscribe to have it available in your personal calendar

Besides our online event where many of you will join, we also thank our jam sites across the globe for joining us this time: London, Manila, and Groningen 🥳 The Apart team and multiple active AI safety researchers will be available on the server to answer all your questions in the #help-desk channel.

See you soon!

🤗 The Organizing Team

Email: Wednesday, June 26

🥳 We're excited to welcome you for the Deception Detection Hackathon this coming weekend! Here's your short overview of the latest resources and topics to get you ready for the weekend.

For the keynote, we're delighted to present Marius Hobbhahn from Apollo Research who will be inspiring you with a talk on AI deception detection and their research.

It will all be happening in the Talks & Keynotes channel on Discord and on our YouTube livestream.

Check out this 5-minute video with the most relevant hackathon information:

🚀 Kicking off the hackathon

To make your weekend productive and exciting, we recommend that you take these two steps after we begin:

  1. 🤓 Read or skim through the resources above (30 minutes to a couple of hours)
  2. 💡 Uncritically brainstorm ideas, select among them, and share them with others in the hackathon to get a team together and submit a great project!
    • Share your ideas in the #projects | teams channel to find other brilliant minds to collaborate with, discuss your ideas, and get feedback on them from our mentors

🏆 Prizes

For more information about the prizes and judging criteria for this weekend, jump to the hackathon website. TL;DR: your projects will be judged by our brilliant panel and receive feedback. Next Thursday, we will host the Grand Finale, where the winning teams will present their projects and everyone is welcome to join. These are the prizes:

  • 🥇 $1,000 to the top project
  • 🥈 $600 to the second place project
  • 🥉 $300 to the third place project
  • 🏅 $100 to the fourth place project

🙋‍♀️ Questions

You will undoubtedly have questions that you need answered. Remember that the #❓help-desk channel is always available and that the organizers and mentors will be available there.

✊ Let's go!

We really look forward to the weekend and we're excited to welcome you with Marius and Apollo on Friday on Discord and YouTube!

Remember that this is an exciting opportunity to connect with others and develop meaningful ideas in AI safety. We're all here to help each other succeed on this remarkable journey for AI safety.

We'll see you there, research hackers!

The Organizing Team