AI for Good in Practice: A Conversation with Gabe Kopley and Felicia Curcuru
In the technology industry, innovation is often measured in speed. Faster models. Shorter deployment cycles. Bigger funding rounds. But in child welfare, where every data point represents a real child, a real family, and irreversible decisions, speed alone is not the metric that matters. Trust is.
Binti, a San Francisco–based technology company serving nearly half of America’s child welfare agencies, sits at the intersection of that tension. It builds modern SaaS software for social workers. It embeds AI into some of the most sensitive government workflows. And it does so under scrutiny that few private companies ever experience.
In October 2025, Binti marked a full-circle moment when Gabe Kopley, its co-founder, returned as Chief Technology Officer. After several years leading engineering teams at MuleSoft and Salesforce, Kopley returned to help guide Binti through its next phase: scaling AI responsibly across some of the most sensitive government systems in the country.

Kopley’s return follows Binti’s partnership with Anthropic, announced in August 2025. The collaboration, part of Anthropic’s AI for Good initiative, marks a first-of-its-kind deployment of Claude within child welfare systems.
Binti’s mission to redesign child welfare with responsible AI
Founded in 2017 by Felicia Curcuru, Binti now works with 550+ government agencies across 36 states, representing 49% of the U.S. child welfare system. Its software supports over 12,000 social workers and has touched the lives of more than 100,000 families.

Social workers face extreme caseloads. Agencies struggle with staffing shortages. Administrative work consumes nearly half of a social worker’s time. At the same time, governments are wary of AI systems that appear opaque, ungoverned, or misaligned with public values.
Binti’s answer has been deliberate restraint.
The company describes its philosophy as AI-enabled, human-led. AI drafts. Humans decide. And data governance is treated not as a compliance exercise, but as a moral obligation.
The White House moment: From startup to national policy table
The company’s momentum extends beyond product.
In early 2026, Binti signed the White House’s Fostering The Future For American Children And Families pledge and formally expressed its support for the federal A Home for Every Child initiative. CEO Felicia Curcuru joined national leaders at a White House ceremony hosted by First Lady Melania Trump, reinforcing Binti’s growing role as both a technology provider and a policy partner.

The four-year commitment includes launching internships for foster youth, expanding access to services, and giving families better digital tools.
Against this backdrop, CTO Magazine sat down with Curcuru and Kopley. What follows is a wide-ranging conversation on leadership, trust, AI ethics, and what it means to build AI for good inside government systems.
Gabe, let’s start with you. Can you take us through your journey with Binti? How did you first get involved? What was your role during the early days?
Kopley:
So I co-founded Binti with Felicia, was a key part of the launch in San Francisco in 2017, and quickly grew our customer base to almost every county in California, plus a number of agencies across the country using our licensing product, which was our first great hit. We grew our revenue and raised money. But there was a point at which I lost some confidence in myself and wondered whether another leader could do a better job than I could.
This is the first company I founded, and I actually took a break, went into big tech, and had a whole career in distributed systems at MuleSoft, including engineering leadership, growing a group, spawning teams, delivering really high operational performance, and shipping a number of products that were very meaningful to the bottom line of Salesforce and MuleSoft.
I had been in touch with Felicia over the years, and we stayed close. I was always very curious about how things were going with Binti. Felicia would check in, run things by me, and get my thoughts, and I tried to be helpful.
And fast-forward to today: what brought you back to Binti? How did your experience in big tech intersect with Binti’s evolving mission, especially as AI started becoming central to your platform?
Kopley:
There came a point when Felicia was looking for a tech leader again, and from the outside I was keen to explore it, given my growth and Binti’s growth. It was an opportunity that could be a win-win.
So we did. I interviewed, we talked to the team, and we explored what challenges Binti was trying to solve. AI was first and foremost, because Binti had launched its first AI product about a year ago, and it was gaining significant traction. We also wanted to invest heavily in it, and I was looking at what I could bring to the table in terms of AI development tools for the engineering team at large.

This is a moment that all tech companies are going through, which is fairly amazing. These are the sharpest tools our industry has explored in my career. Nobody has the perfect answer for what we should be doing today and tomorrow to best use these tools to facilitate the development of incredible enterprise software that delivers a ton of user value and maximizes our engineers’ productivity.
Felicia, turning to you: as someone who co-founded a mission-driven tech organization in the child welfare space, what originally motivated you to start Binti? And now, in a world where AI is pervasive, how are you strategically integrating these tools while staying true to your mission?
Felicia Curcuru:
Yeah, definitely. My motivation for co-founding Binti came from a personal experience—my sister adopted two children, and the process was extremely difficult and stressful.
Through research, I learned that there are about 400,000 children in foster care, with 50% likely to experience homelessness at some point and 50% having interactions with the criminal justice system by the age of 17. It didn’t make sense that so many children were in need while there was a shortage of foster parents, and becoming a foster parent was so difficult. That was the original motivation.

Regarding AI, we see it as another tool in our toolbox. Our vision with Binti is to help every child have a family.
We’ve built specialized workflows for eight different teams of social workers, each addressing unique aspects of child welfare. AI is integrated to help social workers save time and increase their impact with children.
By reducing administrative work, it allows social workers to remain longer in the field and spend more time directly with children and families.
While leading a mission-critical tech startup focused on public service and child welfare, what frameworks or mental models do you rely on to ensure that your tech leaders, CTOs, product heads, and engineers remain anchored to the organization’s core mission? How do you prevent mission drift and keep AI and product decisions aligned with social impact goals?
Curcuru:
Our philosophy is to use AI for good, and we think it should be AI-enabled, human-led. In some fields, AI replaces people. In our field, social work is a deeply human craft; AI cannot replace the decisions social workers make. Social workers spend 50% of their time on administrative work, so our focus is on reducing that, allowing them to spend more time on social work.
We have three pillars for AI:
- We don’t use it to make decisions or recommendations; it saves time but does not replace judgment.
- We partner with Anthropic; their models are not trained on our sensitive data.
- AI outputs are drafts that social workers review and confirm.
These guardrails allow us to leverage AI responsibly, build trust with government agencies, and bring over 80 agencies onto our AI product in a short time.

Gabe, building on that, trust is critical when introducing AI into government systems. Given that some stakeholders are naturally skeptical about AI, how do you build trust in your system and its AI tools? What steps are you taking to ensure both security and reliability while introducing advanced AI into traditional government systems?
Kopley:
I’d reiterate Felicia’s principles; they are the most important. On top of that, we set up a framework for experiments that limits potential consequences for our customers. We recruit social workers eager to be at the leading edge, give them early previews, gather feedback, and test before a broader launch.
Binti is already a trusted partner for government digital transformation. By showing how our platform improves data quality and gives social workers modern tools, we can introduce AI gradually and responsibly.
And what about security? Given the sensitive nature of foster care data, how does your team ensure robust protection, particularly when integrating AI partners like Anthropic?
Kopley:
Binti is built with cloud-native, security-forward technologies, partnering with Google Cloud. Our data is the source of truth for children’s personal information, so we have a high moral responsibility. Government standards also help.
Whenever we work with partners like Anthropic, we evaluate their enterprise security rigor, collaborate as thought partners, and ensure safeguards against traditional and emerging AI vulnerabilities.
Integrating advanced AI systems into traditional government infrastructure is no small feat. From your experience, what have been the most significant technical challenges, and how does your team address them to ensure reliable adoption and maximum impact?
Kopley:
The main challenge is the maturity of existing digital transformation. Data can be partially structured, making it harder to extract insights reliably. Our approach is to incrementally make data fully structured wherever possible, which maximizes the value of AI and improves enterprise workflows.
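The incremental-structuring approach Kopley describes can be illustrated with a small sketch: pull typed fields out of a semi-structured case note while keeping the raw text intact, so nothing is lost when a field cannot be parsed. The note format, field names, and parsing rules here are hypothetical, not Binti's actual pipeline.

```python
import re
from dataclasses import dataclass
from typing import Optional

# Hypothetical record: structured fields where parsing succeeds,
# with the original free text always preserved alongside them.
@dataclass
class StructuredNote:
    visit_date: Optional[str]
    worker: Optional[str]
    raw_text: str

def structure_note(text: str) -> StructuredNote:
    """Incrementally structure a case note: extract what we can, keep the rest."""
    date = re.search(r"\b(\d{4}-\d{2}-\d{2})\b", text)
    worker = re.search(r"Worker:\s*([A-Za-z .'-]+)", text)
    return StructuredNote(
        visit_date=date.group(1) if date else None,
        worker=worker.group(1).strip() if worker else None,
        raw_text=text,  # unstructured data stays available for later passes
    )
```

Each field that becomes reliably structured this way is one less place where a downstream AI feature has to guess.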
In child welfare, data integrity is mission-critical. Algorithmic bias is a serious concern with sensitive information. How does Binti design and maintain an ethical, reliable data pipeline that AI models can safely leverage?
Kopley:
We maintain an internal framework to specify invariants for expected results, preventing regression, hallucinations, or errors in extracted data. Social workers validate AI outputs to ensure tools improve decision-making without introducing bias. Our interface and product design minimize any unintended consequences from technology.
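An invariant framework of the kind Kopley mentions might look, in miniature, like the sketch below: declarative checks that AI-extracted records must satisfy before a social worker ever sees them. The field names, ranges, and vocabulary are illustrative assumptions, not Binti's actual rules.

```python
# Hypothetical invariant checks on an AI-extracted record.
# All names and thresholds are illustrative only.

def validate_extraction(record: dict) -> list[str]:
    """Return a list of invariant violations for an AI-extracted record."""
    violations = []
    # Invariant: required fields must be present and non-empty.
    for field in ("case_id", "child_age", "placement_status"):
        if not record.get(field):
            violations.append(f"missing required field: {field}")
    # Invariant: numeric values must fall in plausible ranges,
    # catching hallucinated numbers before they propagate.
    age = record.get("child_age")
    if isinstance(age, int) and not 0 <= age <= 21:
        violations.append(f"child_age out of range: {age}")
    # Invariant: categorical fields must come from a closed vocabulary.
    if record.get("placement_status") not in {"placed", "pending", "reunified"}:
        violations.append(f"unknown placement_status: {record.get('placement_status')}")
    return violations
```

Records that fail any invariant would be flagged for human review rather than written through, which is consistent with the human-led posture described above.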

AI is advancing at a breakneck pace. How do you ensure your teams stay ahead, adapt quickly to new models and tools, and continuously translate that capability into better outcomes for social workers and children?
Kopley:
On the engineering side, we hire curious engineers and create space for exploration. We host AI hackathons and encourage knowledge sharing internally. Engineers test new tools, see how they improve productivity, and bring useful ideas into our products.
Curcuru:
Our AI team constantly compares new models and evaluates improvements. The landscape shifts quickly, but these changes allow us to overcome previous limitations and expand what we can do to support social workers.
Looking ahead, which AI innovations do you see driving the biggest transformation in child welfare, and how will these tools change the daily work of social workers and agency administrators?
Curcuru:
We’re excited about AI helping social workers record meetings and auto-generate paperwork drafts, saving hours each week. Voice tools for coordination and confirmations are also promising.
Kopley:
Data and analytics for agency administrators are transformative. AI lets them ask open-ended questions on real-time KPIs and identify trends to improve agency operations, ultimately benefiting children and families.
Gabe, there’s an ongoing debate on AI governance: should frameworks be localized for specific agencies or standardized globally? In your view, how should government systems strike the right balance?
Kopley:
We need both. Large agencies use many software vendors and LLMs, so governance should be global while allowing specificity within departments. Security is evolving fast, and we can help public sector partners adopt best practices from private enterprise governance.
For technology leaders striving to create meaningful social impact, what guiding principles would you highlight for building high-performing teams, choosing the right partners, and maintaining unwavering focus on mission-driven outcomes?
Kopley:
Don’t hesitate to partner with mission-aligned companies like Anthropic. Hire teams who care deeply about the outcome, empower them, and give them space to do great work together.
Curcuru:
Identify the problems you care about most. Mission-driven passion helps you persevere through the challenges of a startup. Many on our team have personal connections to child welfare; others have cultivated a deep care for the mission. Screening for passion is key.