With help from Derek Robertson
One of the most pie-in-the-sky corners of the crypto world is suddenly going corporate.
Last week, SushiDAO, the group that controls popular crypto exchange SushiSwap, voted to convert itself into a new corporate structure, albeit a creative one: an interrelated group of two foundations and a corporation spread across Panama and the Cayman Islands.
For anyone following the development of DAOs, this is exactly the kind of arrangement they were supposed to prevent, or at least replace: Their decentralized structures and collective governance were intended to upend the whole landscape of traditional corporate ownership and control.
But DAOs have come under more legal scrutiny of late — and SushiDAO’s tropical re-registration amounts to a concession to that reality.
Though DAO backers make the case that they are a new kind of group that should play by new rules, regulators and plaintiffs’ lawyers don’t always agree. In some cases, they’ve started seeking to prove that these groups are simply conducting unauthorized financial activity, and that their members are liable for anything that goes wrong.
As a result, a cottage legal industry has sprung up to register DAOs in the same tropical jurisdictions long favored by traditional corporations.
DAOs are blockchain-based groups that often delegate voting authority to holders of crypto tokens in a way that is roughly analogous to the voting rights of corporate shareholders. In theory, they could shake up the way people approach all sorts of endeavors — from climate action to space exploration — by allowing large groups to coordinate online without any centralized oversight. Or at least that’s what their backers argue.
In practice, many DAOs today remain under the effective control of small groups of founders, and the largest ones tend to operate in the prosaic world of finance.
In the wake of the crypto market’s spring meltdown, the focus of many founders has shifted from revolutionizing finance to addressing their own legal risk. For some DAOs that began in the digital equivalent of their founders’ garages — with a Discord chat and a custom-issued crypto token — that has meant registering as an old-fashioned corporation.
Since the CFTC became the first regulator to sue an entire DAO last month, alleging it had failed to comply with commodities trading laws, the scramble to register has accelerated as DAO participants realize they are in uncharted legal waters, according to Nicholas Saady, a blockchain-focused attorney at Pryor Cashman.
The suit has raised the prospect that legal troubles could affect not just founders, but anyone who bought or voted with a group’s governance tokens, signaling an urgent need for liability protection.
“It is hard for a DAO member to know when, how, for what conduct, or under what legal theory they may be held liable,” Saady wrote in an email.
While hundreds of self-described DAOs have incorporated in Wyoming since the state created a special category for them last year, many others are going to favorite offshore jurisdictions like Panama and the Cayman Islands. (SushiDAO is setting up entities in both.)
Some DAOs have been drawn in particular to a unique entity offered in the Caymans called a foundation company, because it provides a legal structure without an owner. But the legal protections that corporations might offer to DAOs remain untested.
And while American founders can run from regulators, they can’t hide for long, according to Max Dilendorf, a New York lawyer who specializes in crypto issues.
Dilendorf said that long before crypto existed, U.S. regulators proved they could pierce the offshore corporate shells used by American tax-dodgers. Similarly, he said that if U.S. regulators go after DAOs’ offshore foundations, they will be able to show that many of them remain under the effective control of founders who can be held liable for the DAOs’ activities.
“When I hear ‘Panama DAO’ I’m like, ‘OK, really?,’” he said. “That structure will not survive through a discovery phase.”
Dilendorf said his skepticism has been heightened by the engagement letters he’s seen drafted by law firms offering offshore DAO incorporation services. The letters, he said, lacked assurances that the structures would pass muster with U.S. regulators.
“What do you guarantee as lawyers? Nothing,” he said. “It’s not like they’ll be dealing with DOJ if anything goes wrong.”
We talk in this newsletter about the EU’s approach to tech regulation, and about China’s role in the global competition for tech supremacy. But how do the two match up with each other—and what does that mean for political and policy leaders trying to oversee these industries across the globe?
A recently published study from a group of Oxford researchers attempts to answer those questions by comparing the EU’s and China’s approaches to AI ethics. The researchers argue that precisely because Western and Chinese conceptions of “ethics” and underlying value systems differ so greatly, there are valuable lessons to be learned from seeing where the two systems don’t overlap and what each might have to offer the other.
One major finding: Both systems are ill-equipped to incorporate the kind of user-based, community feedback that many AI researchers and ethicists insist is necessary to minimize harm.
The researchers cite heavy industry involvement in shaping the EU’s AI Act, and conversely, a “restricted interest group ecosystem” in China. They prescribe an experiment with “citizens assemblies on AI that represent the full diversity of China and the EU respectively,” citing projects like the U.K.’s Citizens’ Biometrics Council that provide training for everyday citizens on how to deliberate over the use of new technologies in daily life. — Derek Robertson
While the EU might have a stronger regulatory grip on Big Tech’s bridle than the U.S., it’s not acting entirely without cooperation from the industry it’s trying to rein in.
And nowhere is that more evident than in the EU’s nascent rules for AI — which is starting to worry some activists and technologists, as POLITICO’s Clothilde Goujard and Gian Volpicelli reported today for Pro subscribers.
As they write, industry is involved by design and somewhat inescapably, with the AI Act “lean[ing] on industry forums, such as CEN-CENELEC and ETSI, to outline the technical instructions that ensure AI systems are trained on unbiased data and ultimately determine how much human oversight is needed.”
Standards groups like those have long served as important interlocutors between government and industry, helping ensure that regulations are put in place in a manner that simply makes sense technically, without “favoring” either side. But as Clothilde and Gian report, many researchers are skeptical that ethics will outweigh simple financial calculations in deciding what the standards for AI should be.
As Michael Veale, an associate professor in digital rights and regulation at University College London, said in no uncertain terms: AI is simply too sophisticated and powerful “to be fixed by a piece of legislation that treats it like a toy, a radio or a piece of protective equipment.” — Derek Robertson
Stay in touch with the whole team: Ben Schreckinger ([email protected]); Derek Robertson ([email protected]); Steve Heuser ([email protected]); and Benton Ives ([email protected]). Follow us @DigitalFuture on Twitter.
Ben Schreckinger covers tech, finance and politics for POLITICO; he is an investor in cryptocurrency.