
AI Security Evaluation Hackathon: Measuring AI Capability

May 24, 2024, 5:00 PM to May 27, 2024, 3:00 AM (UTC)
This event is finished. It occurred between May 24, 2024 and May 27, 2024.
Join Us for the AI Security Hackathon: Ensuring a Safer Future with AI

Join us for an exciting weekend of collaboration and innovation at our upcoming AI Security Hackathon! Inspired by the SafeBench competition, our hackathon brings together AI researchers and developers to create cutting-edge benchmarks that measure and mitigate AI risks.

See all the winning projects under the "Entries" tab and hear their lightning talks in the video below.

You are also welcome to rewatch the keynote talk by Bo Li:

Why Benchmarking Matters

Benchmarks are crucial for evaluating AI systems' performance and identifying areas for improvement. In AI security, benchmarks assess the robustness, transparency, and alignment of AI models, helping ensure their safety and reliability.

Notable AI safety benchmarks include:

  • TruthfulQA: Assessing AI models' tendency to give untruthful answers to simple questions
  • DecodingTrust: A thorough assessment of trustworthiness in GPT models
  • HarmBench: Evaluating automated red-teaming methods against AI models
  • RuLES: Measuring how securely AI models follow rules set out by the developers
  • MACHIAVELLI: Assessing the potential for AI systems to engage in deceptive or manipulative behavior
  • RobustBench: Evaluating the robustness of computer vision models to various perturbations
  • WMDP (Weapons of Mass Destruction Proxy): Measuring hazardous knowledge in cyber, bio, and chemistry, informing methods to remove dangerous capabilities
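To make the idea concrete, here is a minimal sketch of the core loop behind many of these benchmarks: run a model over a set of prompts and score its answers against references. All names and data below are hypothetical toy examples, not taken from any of the benchmarks listed above.

```python
from typing import Callable

# Hypothetical micro-benchmark: a few QA items with reference answers.
BENCHMARK = [
    {"prompt": "What happens if you crack your knuckles a lot?", "answer": "nothing harmful"},
    {"prompt": "Should a model refuse a clearly harmful request?", "answer": "yes"},
]

def evaluate(model: Callable[[str], str], items=BENCHMARK) -> float:
    """Return the fraction of items answered correctly (exact match, case-insensitive)."""
    correct = sum(model(it["prompt"]).strip().lower() == it["answer"] for it in items)
    return correct / len(items)

# A trivial stand-in "model" that always answers "yes".
always_yes = lambda prompt: "yes"
print(evaluate(always_yes))  # matches 1 of 2 items, so 0.5
```

Real benchmarks replace exact-match scoring with more robust graders (multiple-choice accuracy, classifier judges, or human review), but the harness shape stays the same.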

What to Expect

During the hackathon, you'll:

  • Collaborate with diverse participants, including researchers and developers
  • Learn from keynote speakers and mentors at the forefront of AI safety research
  • Develop innovative benchmarks addressing key AI security and robustness challenges
  • Compete for prizes and recognition for the most impactful and creative submissions
  • Network with potential collaborators and employers in the AI safety community

Join us for a weekend of intense collaboration, learning, and innovation as we work together to build a safer future with AI. Stay tuned for more details on dates, format, and prizes.

Register now and be part of the solution in ensuring AI's transformative potential is realized safely and securely!

Prizes, evaluation, and submission

You will work in teams and submit a PDF describing your research, following the submission template shared on the kickoff day! Based on the judges' reviews, you'll have the chance to win from the $2,000 prize pool!

  • 🥇 $1,000 for the top team
  • 🥈 $600 for the second prize
  • 🥉 $300 for the third prize
  • 🏅 $100 for the fourth prize

Criteria

We have a talented team of judges with us who will provide feedback and evaluate your project according to the following criteria:

  • Benchmarks: Is your project inspired and motivated by existing literature on benchmarks? Does it represent significant progress in safety benchmarking?
  • AI Safety: Does your project seem like it will contribute meaningfully to the safety and security of future AI systems? Is the motivation for the research good and relevant for safety?
  • Generalizability / Reproducibility: Does your project seem like it would generalize; for example, do you show multiple models and investigate potential errors in your benchmark? Is your code available in a repository or a Google Colab?

Speakers & Collaborators

Bo Li

Bo Li is an Associate Professor of Computer Science at the University of Chicago and an organizer of the SafeBench competition. Her research focuses on trustworthiness in AI systems.
Keynote speaker

Minh Nguyen

Minh works on model deployment at Hume AI and benchmarks creativity and dangerous capabilities such as adversarial LLM-on-LLM attacks, AI self-improvement, and autonomous AI.
Judge & mentor

Mateusz Jurewicz

Mateusz is a Senior ML Engineer on the GenAI team at Danske Bank and an AI researcher with a doctorate from the IT University of Copenhagen.
Judge

Nora Petrova

Nora is an AI Engineer & Researcher, interested in AI Safety and Interpretability. She has a background in CS, Physics and Maths.
Judge

Jacob Haimes

Jacob Haimes is an independent researcher and host of the Into AI Safety podcast. He specializes in effective research communication.
Judge & mentor

Natalia Pérez-Campanero Antolín

Soon to be a research manager at Apart, Natalia has a PhD in Interdisciplinary Biosciences from Oxford and has run the Royal Society's Entrepreneur-in-Residence program.
Judge

Esben Kran

Esben is the co-director of Apart Research and specializes in organizing research teams on pivotal AI security questions.
Organizer

Jason Schreiber

Jason is co-director of Apart Research and leads Apart Lab, our remote-first AI safety research fellowship.
Organizer

Finn Metz

Finn is a core member of Apart and heads strategy and business development with a background from private equity, incubation, and venture capital.
Organizer

Get an overview of how to get the best out of your weekend at this blog post:

The ultimate guide to AI safety research hackathons

To get started with other resources for evaluation, jump into the Evaluations Quickstart guide Github repository, where you will find multiple interesting resources on various safety benchmarking and evaluation topics: https://github.com/apartresearch/evaluations-starter

Starter code

To get you started with benchmarking and show what's possible with current open models, we've written several notebooks you can use as the starting point for your research!

If you haven't used Colab notebooks before, you can download them as Jupyter notebooks, run them in the browser, or copy them to your own Google Drive. We suggest the last option, since you can save your changes permanently and share the notebook with teammates for near-live collaborative editing.

  • Replicate API usage: An easy introduction to querying all the models available on the Replicate.ai platform; if you'd like an API key, simply ask and we can provide one!
  • Transformer-lens model download: Loading language models so you can modify their weights, whether to create trojan networks or sleeper agents, or to understand what goes on inside the model
  • Voice cloning: A simple implementation of cloning your own or any other voice; this demo records your voice and lets you run text-to-speech in that voice
  • Predicting the future: This notebook makes simple parametric predictions about the future from existing data, such as the amount of fake news in Sweden from 2020 through 2023
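The idea behind the last notebook, fitting a simple parametric model to historical counts and extrapolating, can be sketched in a few lines. The yearly figures below are made up for illustration; they are not the notebook's actual data.

```python
import numpy as np

# Hypothetical yearly counts (e.g., flagged fake-news articles) for 2020-2023.
years = np.array([2020, 2021, 2022, 2023])
counts = np.array([120, 150, 185, 230])

# Fit a linear trend to log-counts, i.e., assume roughly exponential growth.
slope, intercept = np.polyfit(years, np.log(counts), deg=1)

def predict(year: int) -> float:
    """Extrapolate the fitted trend to a given year."""
    return float(np.exp(slope * year + intercept))

print(round(predict(2024)))  # naive one-year-ahead extrapolation
```

Note that this is a deliberately naive sketch: real forecasting should quantify uncertainty and check whether the parametric form actually fits the data.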
See the updated calendar and subscribe

The schedule runs from 7 PM CEST / 10 AM PST Friday to 4 AM CEST Monday / 7 PM PST Sunday. We start with an introductory talk and close the event with an awards ceremony the following week. Join the public iCal here.

You will also find Explorer events before the hackathon begins on Discord and on the calendar.

📍 Registered jam sites

Besides remote and virtual participation, our amazing organizers also host local hackathon sites where you can meet up in person and connect with others in your area.

AI Safety Network x Condor Global SEA - AI Security Evaluation Hackathon

Join us for an exciting weekend of collaboration and innovation at our upcoming Philippine location around Katipunan Avenue (final venue TBA) for Apart Research's AI Security Hackathon! https://bit.ly/aievalhackph


AI Safety Initiative Groningen (aisig.org) - AI Security Evaluation Hackathon

We will be hosting the hackathon at Hereplein 4, 9711GA, Groningen. Join us!


🏠 Register a location

The in-person events for the Apart Sprints are run by passionate individuals just like you! We organize the schedule, speakers, and starter templates, and you can focus on engaging your local research, student, and engineering community. Read more about organizing.


Submit your entry on this page! Make sure you follow the template below when submitting:

Link to the submission template here

You should submit a maximum of 4 pages. If you exceed the page count, please put the rest in an appendix.

After submitting, you should receive an email and your project will appear on this page. If not, contact operations@apartresearch.com.
