
Tides will ebb and flow / 2024-02-15

Highlights from recent news, including a few interesting new true-open models but lots of negative signs from other quarters of the open/AI culture wars.
[Image: small boat anchored in a rocky inlet, in the style of vaguely Asian watercolors. Generated by Midjourney.]

Intro

It continues to be a busy time, and a complex one, for open or near-open AI. Let’s get to it.

Micro-essay, elsewhere

I just finished one public talk and one debate on open and AI, and wrote about them on the Tidelift blog. TLDR: regulated open is coming fast and hard, and we need to figure out how to deal with it.

A note I didn't really emphasize in the Tidelift blog post, but that I suspect is going to be a recurring theme this year: the traditional open licenses and definitions mostly ignore or fudge the questions of "what is legal" and "how do we think about the interaction of illegality and openness". That was a viable approach when the overwhelming majority of potentially-open software was clearly legal, and corner cases like export control restrictions and DeCSS could be mostly ignored. But in a world where much more "open" is going to be potentially highly regulated, providers of open legal tools are going to need to think about how their tools interact with those legal regimes, and providers of open definitions are not going to be able to avoid questions like "authors restricted their code so that it could remain legal; do those restrictions violate our open definitions?"

Events

(All streaming unless otherwise noted)

  • Running from mid-February to mid-March, Open Future and Creative Commons are doing an interactive poll to help understand the future of open AI. You can participate a little or a lot, and all feedback gets aggregated in an interesting and valuable way, so please give it a try!
  • The Open Source Initiative’s draft open definition for AI continues to iterate, now at version 0.0.5. There’s also now a forum for discussion where you can participate.
  • Discussion on model marketplace governance, at Berkman-Klein, March 6th. If we’re going to do open models, we’re eventually going to also do the equivalent of today’s open package managers, so thinking now about how those are governed is interesting and important.

New models

Some very nice releases just this week in new open models:

  • Aya, from CohereAI: focused on translation; Apache-2 licensed including data. I have some grumbles about Apache as a license for linguistic data, and we still don’t know what attribution even means for training data, but… at least it isn’t another custom license.
  • Olmo, from Allen AI: Apache-2, with a very full suite of components, including data and (critically) an evaluation framework. The press release is press-release-y, but does a pretty good job hitting many of the high points of why open is important and powerful for society.
  • Not-actually-open, but a useful contrast, from Meta: Mark Zuckerberg pretty much led the Meta earnings call with a long lecture about the value of open AI. Quite in contrast with the Allen Institute's discussion of actual-open's value to society, and befitting an earnings call, Zuckerberg's primary focus was on the benefit of near-open to Meta. His argument is coherent, detailed, and, critically, not new. It is essentially a re-hash of all the arguments used for the past two decades to justify big tech's investment in open. No guarantee, of course, but that continuity between the old arguments and the new suggests that at least some parts of the industry will continue to see open as an investment worth making (and/or co-opting).

New Transparency Regulation

Not actually new, but it was pointed out to me this week that buried deep in the EU AI Act's transparency rules is a requirement that models must "document and make publicly available a summary of the use of copyrighted training data". That requirement takes effect in 2025, but the minimum requirements for that transparency reporting are going to be negotiated Very Soon Now.

That's potentially a very interesting leverage point, where open activists could push from "minimum required training transparency" toward "maximally plausible training transparency". I look forward to seeing how that plays out.

Open (or not) values in AI

In this section: what values have helped define open, and are we seeing them in ML? Bad news: almost every example in this edition is about how the values in AI (not specific to open) are going the wrong way: rising barriers, narrowing participation, etc. 😬

Lowers barriers to technical participation

Some folks have been talking for a while about giving governments very deep hooks into our computing infrastructure in order to, essentially, allow for kill switches on “bad” models—in some ways the ultimate “barrier to technical participation”, akin to the Clipper Chip or Great Firewall of China. 

This paper from a variety of authors (including OpenAI, UK academics, and one of my favorite law school profs) is a deep and mostly thoughtful dive on the problem.

On the plus side, unlike much other writing in this space, the paper does not ignore the civil liberties problems inherent in such proposals, nor does it simply assume that AGI is going to kill humankind and that all current civil liberties concerns are therefore trivial.

But I think it is still ultimately somewhat shallow and handwave-y about the scope of the civil liberties problem here. One way of thinking about civil rights guarantees in democracies is that they knowingly introduce friction into governance, in order to protect individual freedom. (Think, for example, of Miranda warnings.) In other words, we knowingly accept that all forms of regulation will be limited, because we know that the alternative is very aggressive overreach. There’s a lot to criticize in that model, and it requires a lot of complex and imperfect compromises, but it has worked pretty well overall for human flourishing. This approach potentially throws that away in computing, over extremely hypothetical risks, and we should be careful about accepting the premises it starts from—not just because of the negative impact on open!

Enables broad participation in governance

Today in “if you can’t align a non-profit with society, you probably can’t align AI with society”, OpenAI promised to make governance documents open and then... didn’t.

In very related news, I wish I’d written Public Citizen’s letter to the California Attorney General about OpenAI’s non-profit status. So far Rob Bonta seems to be completely abdicating his role as a regulator of California non-profits. I suspect dissolving OpenAI altogether (which Public Citizen calls for) is a stretch, but it’s absolutely not a stretch to investigate what steps are being taken to ensure that the organization reflects the public interests it is charged with serving.

To be clear, the rest of open should not be taking a victory lap here: non-profits in the actually open space aren’t necessarily models of great governance. But the public nature of the code does introduce some discipline, since volunteers and donors can walk away and restart the organization rather than resorting to the internecine warfare we all saw play out in public in December.

Improves public ability to understand (transparency-as-culture)

Not specific to open AI, but I really liked this observation on how the English language makes it harder for us to collectively reason about AI: 

Xandra Granade 🏳️‍⚧️ (@xgranade@wandering.shop)
Something I’ve noticed in talking about “AI” is how difficult it is not to bias towards agent-first language, such as “the LLM thinks that.” In a way, it’s an odd parallel to the problem of trying to talk about evolutionary pressure without falling back on “evolution tries to” or similar phrases. At least personally, I find that English makes it tricky to describe objective functions or patterns in stochastic processes without at least sometimes using agent-first language.

I wish I could follow this up with tips on how to write/talk better about it, but no luck yet.

Somewhat related, here’s a new frontier in UX that seems to deliberately deceive the user about its humanity, and to almost deliberately make it hard for humans to understand what is going on.

Makes the world better

Again in “all I have this edition is counter-examples”, it was impossible to escape Taylor Swift’s fake nudes in discussions of AI the past couple of weeks. If you were under a rock, here’s a good backgrounder.

These were clearly a case of AI not making the world better, but I think they also highlighted that our society has not tried at all to make this sort of thing illegal, or to enforce laws in the few cases where it is illegal. We should probably try doing that first, for all the many pre-LLM deepfake tools and deepfake tool users, before pre-emptively locking down AI.

Shifts power

This week in “socialize the costs, privatize the benefits”, an op-ed on why we should be careful to scrutinize the new wave of “public-private partnerships” in AI. This to me is one of those things where open should be a no-brainer: public money, public code.

Balancing who gets what when capital is needed is not exclusively a public-private problem, of course; Google has announced a partnership with Hugging Face to provide compute, and will surely have structured the deal to benefit from customer compute usage while putting a lot of the risk on the startup.

Misc.

  • I’m extrapolating only a little bit from this funny-smart blog post when I say that, soon enough, every grain of sand on Earth will be training the next version of ChatGPT. Since that’s obviously not possible, it does suggest a lot of money is about to get poured into optimizing training, hopefully to the benefit of open AI.
  • Mozilla Foundation report on Common Crawl: biggest recurring themes are governance and attribution. In other words, open is an important start but it isn't enough.
  • It has been clear that collecting societies, a traditional tool for artists to do collective licensing, are going to have a moment. Looks like that is starting in earnest now, in France, with a collecting society opting out of text and data mining (TDM) for all of its members. Unclear if that is actually a thing the society can do, but they’re going to try.
  • I’m in favor of mechanisms for incentivizing the edges of the network, and I’m in favor of decentralizing training, but... not sure about doing it on the blockchain. May be one of those rare somewhat-sensible use cases, though.