✨💑 New_ Public’s AI framework: Learn in public, keep humans centered
Everything about our new policy on AI use, from "secret cyborgs" to "start-up mindset"
Help us with a pulse survey on AI and innovation here. More info below.
Join us for our upcoming virtual event on 11/20 at 4pm EST / 1pm PST, where Co-Founder Talia Stroud will be presenting our new nationwide poll results on how Americans feel about their local digital spaces.
Let’s continue our exploration of artificial intelligence. I do, genuinely, appreciate your partnership and interest in taking on this topic, because it is so huge and so complex.
It’s hard to miss the swirling miasma of shame and stigma around AI, coming from many different directions.
Some people are feeling pressure to use these new tools, maybe from their boss or their peers, and they are conflicted about participating in something they see as a negative force for society or humanity. Others are what Ethan Mollick calls “secret cyborgs” — excited to dive in and experiment, but doing so in secret because of social pressure. And of course, there are folks (you’ve undoubtedly seen them on LinkedIn) who are quite outspoken about their embrace or rejection of AI. For critics of social media in public interest tech, the stakes feel extra high and urgent.
Recently, I’ve been talking to Dan Be Kim, Translational Research Designer at the Harvard Center for Digital Thriving, and an expert working with New_ Public to shape our thinking around use of AI tools. Dan Be says that these complex social dynamics require a great deal of transparency and openness, especially from institutions.
“There’s this incredible nuance to AI use, because it’s largely also influenced by our lived experiences and value systems,” Dan Be says. “It’s never been more important to talk about our tech use and how we’re approaching it.”
With Dan Be’s help, New_ Public has created the first version of a policy about our own AI use – it will be a longform, living document where we can share our thinking as it evolves. Below, I’ll preview some of what we’re sharing, with some extra context from Dan Be.
But first, we need your help in thinking about the future of technology. Your thoughts will guide our research, as Co-Director Eli Pariser explains below.
– Josh Kramer, Head of Editorial, New_ Public
Help us with a pulse check
The New_ Public team is exploring how AI and other new technologies and behaviors are reshaping the social fabric of the internet: how people connect, converse, and build community online. This will both inform our product work and strategy and be developed into a resource for others building in the space. And we need your help!
As part of this work, we’re collecting signals — the small, surprising, weird (or delightful) things you’ve observed online recently.
What innovation are we missing? What vital experiments should we be watching? Your observations will help us understand what’s alive and worth paying attention to right now, and we’ll share some of your ideas and responses in the report we’re putting together in February.
Please share with us here. Thanks for your time and your attention.
– Eli Pariser, Co-Director, New_ Public
AI’s Impact
That stigma I mentioned up top? It’s deeply connected to a long list of concerns people have about how AI may already be transforming the world. In our policy, we identify potential impacts on the environment, culture, economics, idea ownership, business models, and likely far more:
We are thinking seriously about how AI will replace jobs, and how people will re-evaluate their sense of self-worth as AI takes on more human tasks. The creation and maintenance of these models often depends on extractive labor — a few companies, owned by billionaires, benefit from users training their models for free and from unethical data labeling practices that exploit vulnerable populations.
We think it’s essential to be informed about the impacts and potential risks, even as we experiment with some of these tools — which for us means being a critic, rather than a skeptic. As Dan Be says, the key difference is that while a skeptic might disengage, a critic leans in, stays informed, and asks the hard questions.
“You need to actually have a lot of awareness to make a decision to discern how this is affecting you, how this is relevant for you both as an individual and an organization,” Dan Be says. We want to experience the capabilities and limitations of these models firsthand and determine whether they are worth using.
What we want to try
The TL;DR of the policy reads:
We are experimenting with using AI tools to extend our work as a small nonprofit, so that we can focus our time on reinforcing human connections, conversations, and communities that have eroded.
We’re working on doing that in a number of ways. First, we’ve seen how much further our project teams are progressing in developing new products and tools:
Our Local Lab team has been experimenting with vibe-coded prototyping of new tools and features. Designers, who may not have as much coding experience as engineers, can now experiment with wireframe demos, helping the team make more informed decisions about where to dedicate coding time and energy.
We’re also excited about the possibilities for AI to support community stewardship and take some of the drudgery out of traditional moderator or admin roles:
AI can ease the burden on human moderators and free them to focus on tasks that foster connection and build social trust. We’ve already seen that this approach can work, and there’s exciting potential to scale it dramatically.
And, as you have probably observed in your own digital communities, it’s far too easy for core institutional knowledge to slip between the cracks and vanish into the ether. AI can be a partner in resurfacing and synthesizing some of that information:
AI can help transform these conversations into useful, usable knowledge by assisting stewards in synthesizing long or messy threads into clear, accessible summaries or community-created guides.
These are some of the most promising threads we’re working on at the moment. We’re continuing to explore and challenge ourselves and, in the spirit of transparency, to share what we’re trying — and in some cases failing — to accomplish with AI.
How we’ll do this
As we engage with AI, we will aim to move forward guided by our mission, theory of change, and core beliefs. In addition, we’ve identified some guiding principles specific to AI (each spelled out in detail in the policy):
We learn in public through transparency.
We keep humans in the driver’s seat.
We navigate AI’s inherent biases.
We embrace friction and slow, intentional thinking.
We navigate nuance and complexity by cultivating a flexible, evolving approach.
We design for pluralism and center the public good.
We empower our team through continuous learning and open dialogue.
These probably won’t be right for every organization, and they might not even work in every situation we encounter over the next few years. We continue to value humility, flexibility, and patience.
But don’t assume we are “decelerationists.” Dan Be says a kind of “start-up mindset” is important in the age of AI:
What I mean by start-up mindset is a certain type of curiosity to identify a problem you care about and then having the agency, confidence, competence, and willingness to translate whatever ideas you have about that problem to address it and do something about it — and then having the resilience to deal with the complexity of actually coming up with the right solution.
Like I said at the top, we know this can be a sensitive, even emotional topic, and we are eager for your thoughts. Our door is open, so to speak, and we’d love to hear from you in the comments, or on our social channels.
Thanks for reading. Setting the clock back this weekend,
–Josh