Dear Apart Community,
Welcome to our newsletter - Apart News!
At Apart Research, there is so much to share: brilliant research, great events, and countless community updates.
This week’s edition of Apart News looks at the results of a recent hackathon, shares analysis of OpenAI's o1, and explains why some of our team are in Singapore.
Apart Co-Director on o1 Release
Last week saw the drop of OpenAI’s new o1 model (system card here). Our co-director, Esben Kran, had the following to say:
“It is fascinating that OpenAI has managed to achieve higher model performance through inference and it's an impressive example of the technology, though the development itself was somewhat expected.”
While Esben was more scientifically impressed by Claude 3.5 Sonnet, he noted that “o1 offers pragmatic advantages, particularly in its scalability and potential to support future iterations like GPT-5.” Esben also mentioned that the fundamental principles of this approach were likely established a year ago by groups like Google DeepMind.
However, he expressed concern over OpenAI's apparent protection of the reasoning chain, warning that if “this setup is easily replicated, it might introduce new security risks in real systems sooner than we are ready for. OpenAI simultaneously upped their risk classification for the model, and these new algorithms make expensive data sourcing less necessary, removing another bottleneck to intelligence.”
His biggest worry is that “this could be a direct path for AI to become an existential challenge, such as those described by AI experts like Dan Hendrycks, which could present a troubling trajectory for AI safety within the next few years - and I’m not stoked about that.”
Remember when we said we're thinking more about technical AI safety for-profits?
We did just that at our AI Safety Startup Hackathon last week. In our write-up of the event, 'Can startups be impactful in AI safety?', we explain that a startup idea need not be a be-all-end-all solution to a specific problem: safety and security form a complex problem space where a single challenge (e.g. the chance of a rogue superintelligence escaping) can give a single company work for years.
With hackathon projects including graph neural network approaches to automated agent identification, safety evaluations for embodied systems, and agent orchestration and control software, among much else, Esben concluded that at Apart we are “cautiously optimistic about the impact companies might have during the next years in AI safety.” He added that he expects to see the following in an impactful AI safety startup:
- A profit incentive completely aligned with improving AI safety.
- A team with a fundamentally new idea that reshapes a part of AGI deployment.
- An idea that does not try to compete with the safety solutions AGI labs themselves would come up with.
One participant had this to say as the contest wrapped up: "[The AI Safety Startup Hackathon] changed my idea of what working on 'AI Safety' means [...] I went in with very little idea of how a startup can be a means to tackle AI safety and left with incredibly exciting ideas to work on."
Judging Singapore
We are excited to announce that our Community Program Manager, Archana Vaidheeswaran, will be a special Guest Judge at the Singapore government's Digital Services Awards this Thursday.
At Apart, Archana works tirelessly across the world to cultivate a community built on the conviction that AI safety has to become a global and highly focused effort.
Archana will be awarding the 'Outstanding Citizen Contributor Award by GovTech Singapore.' Check it out here.
The weeks ahead
- Concordia Contest winners announced soon: Early next week, we plan to announce the winners of the Concordia Contest hackathon.
- Keep an eye out for Apart's 4th October AI Agent Security Hackathon: 'How do we ensure that our safety research is state-of-the-art for the highest-risk AI actors: agents?'
Have a great week and let's keep working towards beneficial and safe AI.
'Apart is a global collective conducting impactful AI safety research' - and you're part of it.