– Over 70 signatories have called for a more open approach to AI development.
– The letter emphasizes the need for openness, transparency, and broad access in AI governance.
– The debate between open source and proprietary AI is ongoing.
– Some argue that open AI can be manipulated by bad actors, while others believe that scaremongering is used to concentrate control in the hands of a few companies.
– The open letter highlights the benefits of openness in enabling independent research, increasing scrutiny and accountability, and lowering entry barriers for new entrants.
– The letter argues that tight and proprietary control of AI models is not the only path to protecting society from harm.
– Notable names, including Yann LeCun, Andrew Ng, Julien Chaumond, and Brian Behlendorf, have attached their names to the letter.
– Open models can inform an open debate and improve policy making for safety, security, and accountability in AI development.
On the same day the U.K. gathered some of the world’s corporate and political leaders into the same room at Bletchley Park for the AI Safety Summit, more than 70 signatories put their name to a letter calling for a more open approach to AI development.
“We are at a critical juncture in AI governance,” the letter, published by Mozilla, notes. “To mitigate current and future harms from AI systems, we need to embrace openness, transparency, and broad access. This needs to be a global priority.”
Much like what has gone on in the broader software sphere for the past few decades, a major backdrop to the burgeoning AI revolution has been open vs. proprietary — and the pros and cons of each. Over the weekend, Facebook parent Meta’s chief AI scientist Yann LeCun took to X to decry efforts from some companies, including OpenAI and Google’s DeepMind, to secure “regulatory capture of the AI industry” by lobbying against open AI R&D.
“If your fear-mongering campaigns succeed, they will *inevitably* result in what you and I would identify as a catastrophe: a small number of companies will control AI,” LeCun wrote.
And this theme continues to permeate the growing governance efforts emerging from the likes of President Biden’s executive order and the AI Safety Summit hosted by the U.K. this week. On the one hand, heads of large AI companies warn about the existential threats that AI poses, arguing that open source AI can be manipulated by bad actors to more easily create chemical weapons (for example); on the other hand, counterarguments posit that such scaremongering merely helps concentrate control in the hands of a few protectionist companies.
The truth is probably somewhat more nuanced than that, but it’s against that backdrop that dozens of people put their name to an open letter today, calling for more openness.
“Yes, openly available models come with risks and vulnerabilities — AI models can be abused by malicious actors or deployed by ill-equipped developers,” the letter says. “However, we have seen time and time again that the same holds true for proprietary technologies — and that increasing public access and scrutiny makes technology safer, not more dangerous. The idea that tight and proprietary control of foundational AI models is the only path to protecting us from society-scale harm is naive at best, dangerous at worst.”
Esteemed AI researcher LeCun — who joined Meta 10 years ago — attached his name to the letter, alongside numerous other notable names including Google Brain and Coursera co-founder Andrew Ng; Hugging Face co-founder and CTO Julien Chaumond; and renowned technologist Brian Behlendorf from the Linux Foundation.
Specifically, the letter identifies three main areas where openness can help safe AI development: enabling greater independent research and collaboration; increasing public scrutiny and accountability; and lowering the barriers to entry for new entrants to the AI space.
“History shows us that quickly rushing towards the wrong kind of regulation can lead to concentrations of power in ways that hurt competition and innovation,” the letter notes. “Open models can inform an open debate and improve policy making. If our objectives are safety, security and accountability, then openness and transparency are essential ingredients to get us there.”