
Redefining Ethical AI in Business: Garrett Yamasaki’s Vision for Responsible Innovation
As large language models and automation tools race into boardrooms, Garrett Yamasaki is helping redefine what ethical AI in business looks like. A former engineer at Google and Texas Instruments, Yamasaki is now the CEO of WeLoveDoodles, a fast-growing AI-driven consumer platform that merges machine learning, personalization, and inclusive design.
But his mission goes beyond building smarter tech. Yamasaki is challenging traditional leadership models, placing ethics, equity, and long-term responsibility at the heart of how AI is deployed in business.
In this interview spotlight, we sit down with Yamasaki to explore how AI is transforming tech leadership, why DEI should be embedded in code, and what happens when you put empathy at the core of your systems architecture.
Let’s explore the insights shared by the former Google engineer turned pet-tech CEO on why inclusive design isn’t just good ethics; it’s good business.

Thank you for joining us, Garrett. To start, could you share a bit about your journey? What brought you into the world of tech, and how did that path lead you to the pet industry?
Yamasaki: My career trajectory has been anything but conventional. After a rewarding tenure as a software engineer at Google, where I had the privilege of working on groundbreaking projects, I found myself drawn to a different kind of passion—pets, specifically doodles.
It was this intersection of technology and my personal love for these dogs that led to the creation of We Love Doodles. What began as a small, personal project quickly evolved into a global community and e-commerce platform, uniting doodle lovers from around the world.
For me, it’s been a powerful reminder of how technology, when harnessed thoughtfully, can create meaningful connections and foster communities around shared interests.
Let’s move to the big picture. Over the past few years, AI has gone from backroom R&D to a boardroom priority. As a tech leader, how has your role evolved in that transition, and can you point to a moment where ethical AI in business truly changed the game for you?
Yamasaki: AI has shifted my focus from tactical problem-solving to strategic orchestration. At Google, I coded algorithms; now, I architect ecosystems where AI and humans co-create. For example, during 2023’s supply chain chaos, we deployed a reinforcement learning model to predict port delays, rerouting shipments 14 days faster than human analysts.
The AI’s pivot saved $240K in demurrage fees, but the real win was freeing my team to focus on customer experience innovations. My role? Less about building AI, more about bridging it with ethics and business goals.
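To make the rerouting idea concrete, here is a minimal sketch of a delay-predicting model feeding a simple rerouting rule. Yamasaki describes a reinforcement learning system; this sketch simplifies it to a supervised delay predictor plus a greedy route choice, and every feature name, number, and threshold below is an illustrative assumption rather than the company’s actual pipeline.

```python
# Illustrative sketch only, not WeLoveDoodles' production system.
# The interview describes a reinforcement learning model; this simplifies the idea
# to a supervised delay predictor plus a greedy rerouting rule.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

# Hypothetical historical shipments: [port_congestion_index, carrier_score, season]
X_train = np.array([[0.9, 0.4, 3], [0.2, 0.8, 1], [0.7, 0.5, 4], [0.1, 0.9, 2]])
y_train = np.array([21.0, 3.0, 14.0, 2.0])  # observed delay in days

model = GradientBoostingRegressor().fit(X_train, y_train)

def choose_route(routes, reroute_threshold_days=7.0):
    """Pick the route with the lowest predicted delay, rerouting only when
    the default route's predicted delay exceeds the threshold."""
    predicted = {name: float(model.predict([feats])[0]) for name, feats in routes.items()}
    if predicted["default"] <= reroute_threshold_days:
        return "default", predicted["default"]
    best = min(predicted, key=predicted.get)
    return best, predicted[best]

routes = {
    "default": [0.8, 0.5, 4],    # congested port
    "alternate": [0.3, 0.7, 4],  # longer lane, less congestion
}
print(choose_route(routes))
```

The point is the division of labor: the model estimates delay, while a transparent rule decides when a reroute is worth the disruption, which keeps the decision auditable.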
So, would you say AI now sits alongside your business and product strategy as a top priority? And if so, how has that changed the way your leadership team operates?
Yamasaki: Absolutely. AI is our business roadmap. We’ve embedded “AI Ambassadors” (rotating engineers and marketers) into every product team to identify use cases.
For instance, our loyalty program now uses LLMs to generate hyper-personalized training tips based on a dog’s breed and owner’s lifestyle. This forced a leadership restructure: our CTO now co-owns P&L with the CMO, and we’ve added a Chief AI Ethics Officer role to governance boards.
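As a rough illustration of how a loyalty program might generate such tips, the sketch below assembles a prompt from a dog’s breed, age, and the owner’s lifestyle and sends it to an LLM. The prompt wording, profile fields, model name, and use of OpenAI’s Python client are assumptions for illustration, not the company’s actual stack.

```python
# Illustrative sketch only: prompt wording, profile fields, and the chosen model
# are assumptions, not the actual loyalty-program pipeline.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def training_tip(breed: str, age_years: int, owner_lifestyle: str) -> str:
    """Generate one short, personalized training tip for a loyalty-program member."""
    prompt = (
        f"You are a dog-training assistant. The dog is a {age_years}-year-old {breed}. "
        f"The owner describes their lifestyle as: {owner_lifestyle}. "
        "Give one specific, actionable training tip in two sentences."
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # hypothetical model choice
        messages=[{"role": "user", "content": prompt}],
        temperature=0.7,
    )
    return response.choices[0].message.content

print(training_tip("Goldendoodle", 2, "apartment dweller who runs every morning"))
```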
That cross-functional model is powerful, and increasingly necessary. Can you talk about how AI has changed your technical org structure, and what measurable outcomes that’s produced?
Yamasaki: We disbanded our monolithic data team into AI Pods: nimble squads of data scientists, UX designers, and ops leads. One pod built a computer vision tool for quality control, slashing returns of defective products.
Another created a ChatGPT-powered CSR bot that reduced ticket resolution time from 2 hours to 12 minutes. Revenue from AI-driven features now accounts for 28% of our ARR.
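For a sense of what a quality-control screen like that can look like, here is a minimal inference sketch: a previously trained binary classifier scores incoming product photos and flags likely defects for manual review. The model file, labels, and confidence threshold are hypothetical, not the team’s actual tool.

```python
# Illustrative sketch of defect screening; "qc_model.pt", the class labels,
# and the threshold are hypothetical stand-ins.
import torch
from torchvision import transforms
from PIL import Image

LABELS = ["ok", "defective"]
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

model = torch.jit.load("qc_model.pt")  # a previously trained binary classifier
model.eval()

def screen(image_path: str, reject_threshold: float = 0.8) -> str:
    """Flag a product photo for rejection when the predicted defect probability is high."""
    image = preprocess(Image.open(image_path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        probs = torch.softmax(model(image), dim=1)[0]
    defect_prob = float(probs[LABELS.index("defective")])
    return "reject" if defect_prob >= reject_threshold else "ship"

print(screen("incoming/collar_0412.jpg"))
```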
Balancing innovation with stability is every CTO’s dilemma. How do you approach the challenge of investing in AI experiments without losing sight of core business functions, especially when the ROI isn’t always immediate?
Yamasaki: We allocate 70% of AI resources to core ops (inventory, CX), 20% to adjacency experiments (like AI-generated product descriptions), and 10% to moonshots (e.g., quantum ML for demand forecasting). To justify the 10%, we run “Pre-Mortem Hackathons” where teams pitch worst-case failure scenarios, forcing rigor before funding.
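That 70/20/10 split can be expressed as a simple budget guardrail. The sketch below divides an AI budget by those shares and checks whether actual spend stays near the targets; the bucket names, example budget, and tolerance are illustrative assumptions.

```python
# Sketch of the 70/20/10 allocation as a budget guardrail; figures are illustrative.
ALLOCATION = {"core_ops": 0.70, "adjacency": 0.20, "moonshots": 0.10}

def split_budget(total: float) -> dict:
    """Divide a total AI budget according to the 70/20/10 shares."""
    return {bucket: round(total * share, 2) for bucket, share in ALLOCATION.items()}

def within_guardrail(spend: dict, total: float, tolerance: float = 0.05) -> bool:
    """Check that actual spend stays within 5 percentage points of each target share."""
    return all(abs(spend[b] / total - share) <= tolerance for b, share in ALLOCATION.items())

print(split_budget(1_000_000))
print(within_guardrail({"core_ops": 690_000, "adjacency": 210_000, "moonshots": 100_000}, 1_000_000))
```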
Scaling AI across an organization is often about shifting mindsets, processes, and power structures. What were the hardest parts of that transformation for you? And how did you overcome them?
Yamasaki: Data tribalism was a big one. Early on, marketing hoarded customer data, fearing IT breaches. We broke silos by creating a “Data Commons” with role-based access and anonymized synthetic datasets for testing.
Upskilling was another hurdle: 40% of our engineers lacked MLOps expertise. Partnering with Coursera on nano-degrees helped close those skill gaps.
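The “Data Commons” idea, role-based access backed by anonymized synthetic records for testing, can be sketched in a few lines. The role names, fields, and synthetic generation below are assumptions, not the actual implementation.

```python
# Minimal sketch of a "Data Commons": role-based field access plus synthetic
# stand-in data. Role names and fields are assumptions.
import random

ROLE_PERMISSIONS = {
    "analyst": {"order_history", "zip_code"},
    "marketer": {"zip_code"},
    "intern": set(),
}

def synthetic_record() -> dict:
    """Return an anonymized, plausible stand-in record for testing."""
    return {
        "order_history": random.randint(0, 20),
        "zip_code": f"9{random.randint(1000, 9999)}",
    }

def fetch_customer_record(role: str, real_record: dict) -> dict:
    """Serve only the fields a role may see; fill the rest from synthetic data."""
    allowed = ROLE_PERMISSIONS.get(role, set())
    stand_in = synthetic_record()
    return {k: (real_record[k] if k in allowed else stand_in[k]) for k in real_record}

real = {"order_history": 7, "zip_code": "94107"}
print(fetch_customer_record("marketer", real))
print(fetch_customer_record("intern", real))
```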
Knowing what you know now, if you could sit down with your pre-AI self, say, five years ago, and offer one piece of advice, what would it be?
Yamasaki: Start measuring technical debt in ethics, not just code. That chatbot you’ll build in 2024? Its unintended bias will cost three times more to fix later than building it responsibly from day one would have.
That’s an interesting point about building responsibly from the start—not just in code, but in culture. Speaking of which, let’s pivot to inclusive design. What initially sparked your company’s focus on DEI, and how did that evolve into something more foundational than just hiring metrics?
Yamasaki: I grew up as a fourth-generation Japanese American, and I’ve seen firsthand how exclusionary norms, like “culture fit” hiring, can limit innovation. Initially, DEI was reactive: hiring bilingual support reps to help non-English-speaking pet parents. But when our all-male engineering team failed to account for accessibility features for disabled dogs, everything changed: DEI became strategic. Today, DEI isn’t a program; it’s our product development lens.
That’s a powerful shift, from a DEI program to a design lens. Can you share how inclusive leadership shows up in your day-to-day product work? Any specific policy that really exemplifies this?
Yamasaki: It means designing systems that require equity. For example, our “No Empty Chair” policy mandates that every product roadmap meeting include at least two underrepresented voices. When developing our GPS collar’s senior-dog mode, a neurodivergent engineer proposed vibration alerts for hearing-impaired pets, a feature that now drives 15% of sales. Inclusion isn’t optional; it’s how we innovate.
Inclusion is easy to talk about, harder to operationalize. A lot of organizations make DEI pledges, but fewer hold leadership truly accountable. How do you make sure progress happens beyond just the annual report?
Yamasaki: We tie 20% of executive bonuses to DEI KPIs—things like retention rates for marginalized employees, supplier diversity spend, and belonging scores from quarterly pulse surveys.
Our CTO also leads monthly “Code with Empathy” audits, where teams review algorithms for potential bias. In one audit, we caught our AI recommending winter gear for short-haired breeds it had mistakenly labeled as “outdoor dogs.” These reviews keep us honest.
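An audit rule in that spirit might look like the sketch below, which flags recommendations whose assumed label conflicts with known breed traits, one possible reading of the winter-gear anecdote. The trait table, field names, and rule are hypothetical, not the real audit suite.

```python
# Illustrative "Code with Empathy"-style check; the trait table and rule are hypothetical.
BREED_TRAITS = {"greyhound": {"coat": "short"}, "husky": {"coat": "double"}}

def audit_recommendations(recs, breed_traits):
    """Flag recommendations whose assumed label conflicts with known breed traits."""
    flags = []
    for rec in recs:
        traits = breed_traits.get(rec["breed"], {})
        if rec.get("assumed_label") == "outdoor_dog" and traits.get("coat") == "short":
            flags.append(f"Review: {rec['breed']} tagged outdoor_dog despite a short coat")
    return flags

recs = [
    {"breed": "greyhound", "assumed_label": "outdoor_dog", "product": "winter_parka"},
    {"breed": "husky", "assumed_label": "outdoor_dog", "product": "winter_parka"},
]
print(audit_recommendations(recs, BREED_TRAITS))
```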
What about hiring and promotions—any systemic changes you’ve made to support equity?
Yamasaki: We redesigned promotions to reward “equity mentorship.” To advance, managers must actively sponsor two underrepresented employees. And for hiring, we use platforms like GapJumpers for blind skills assessments. Resumes are hidden. We evaluate the work, not the name.
That helped us double our female engineering hires over an 18-month period.
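Mechanically, a blind review step can be as simple as stripping identifying fields before reviewers see a submission, as in this sketch. The field names are assumptions, and this is not GapJumpers’ actual platform or API.

```python
# Minimal sketch of a blind review step; field names are assumptions.
IDENTIFYING_FIELDS = {"name", "email", "photo_url", "school", "address"}

def blind_copy(submission: dict) -> dict:
    """Return the work sample with identifying fields removed for reviewers."""
    return {k: v for k, v in submission.items() if k not in IDENTIFYING_FIELDS}

submission = {
    "name": "Jane Doe",
    "email": "jane@example.com",
    "work_sample_url": "https://example.com/sample/123",
    "assessment_score": 87,
}
print(blind_copy(submission))
```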
I imagine not everyone bought in at first. Did you face internal pushback?
Yamasaki: Definitely. One tenured leader called DEI a “distraction.” I showed him the data—teams led by diverse engineers fixed bugs 30% faster. Then I paired him with a junior Latina engineer for a hackathon. They built a multilingual chatbot that reduced customer complaints by 40%.
That experience changed his perspective more than any policy memo ever could.
Since embedding DEI into the core of your business, what’s the most meaningful shift you’ve seen in your culture? What signals tell you the culture is actually evolving, not just in what people say, but in how they behave?
Yamasaki: Silence died. People now speak up when something’s off. One employee called out ableist packaging from a vendor. Our LGBTQ+ ERG, “Rainbow Retrievers,” helped launch a Pride harness line that’s now one of our top sellers.
It’s not about checking boxes. It’s about creating an environment where trust leads to product breakthroughs. We’ve learned that trust endures, while tokenism does not.
Last question to bring us full circle: If you could offer just one piece of advice to fellow CTOs who are navigating the fast-moving intersection of AI, ethics, and leadership, what would you tell them?
Yamasaki: Stop benchmarking against tech’s low DEI bar. Measure success by who feels safe to fail and whose ideas scale.
Explore more of the AI in the industry series.