Following Your Moral Compass: How AirOps Makes Ethical AI Decisions Without a Rulebook
In 2024, every company using AI is operating without a map. There are no rules, only consequences.
No regulatory body has issued definitive guidelines. No industry consortium has created a universal code of ethics. Every leader is figuring it out as they go, hoping they don't learn their lessons the hard way.
Jessica Rosenberg, Head of Brand at AirOps, navigates this ambiguity daily. AirOps builds AI tools for other businesses. If anyone should have this figured out, it's them. But even they're making decisions without a safety net.
"There are no default standards for AI policies or governance right now," says Jessica. "Every leader has to ask themselves how comfortable they are with using AI in relation to their business."
For most companies, getting AI ethics wrong is embarrassing. For AirOps, it could be existential. Their entire brand is built on helping others use AI responsibly.
The Three-Question Framework
Without clear rules, AirOps built their approach around three core questions. Every time they consider using AI in a new way, they run it through this filter.
Question 1: Does This Tool's Training Data Align with Our Values?
Some AI tools have been trained on copyrighted material without permission. Others have scraped data from sources that violate privacy. Some are transparent about their training data; others are opaque.
AirOps researches every tool's training data before adopting it. If the tool maker won't disclose their sources, that's an immediate red flag. The guiding principle: if our customers knew how this tool was trained, would they lose trust in us?
Real decision: They rejected an AI image generator with known training data issues, even though it produced better outputs than alternatives. The risk to brand integrity wasn't worth the marginal quality improvement.
Question 2: Will Our Audience Feel Deceived If They Find Out AI Was Used?
There's a spectrum. Using AI for ideation or internal research feels low-risk and probably doesn't need disclosure. Using AI to generate draft content that a human heavily edits falls somewhere in the middle. Using AI to create customer-facing content with minimal human involvement? That's high-risk territory where disclosure becomes essential.
"Some studies show that disclosure can erode trust, while others claim it's essential to build it," says Jessica. "You have to reach a decision that makes sense for your brand."
AirOps developed a practical rule: for internal tools and processes, no disclosure needed. For content creation, it depends on context and how much the output was edited. For anything that could mislead customers, they always disclose.
When they used AI to help generate blog outlines, they didn't disclose it because a human wrote the actual content. When they experimented with AI-generated social media images, they added "AI-generated" labels because the visual was the primary content.
Question 3: Could This Application Backfire If It Became Public?
The "headline test" is brutally effective. Imagine a headline in a major publication: "[Your Company] Uses AI to [Whatever You're Planning]." If it makes you uncomfortable, that discomfort is data.
If they'd be proud to explain their AI use publicly, they proceed. If they'd be defensive or embarrassed, that's a signal to pause.
Example: They considered using AI to generate customer testimonial summaries, aggregating real feedback into concise quotes. Technically, these would be based on authentic customer sentiment. But imagine the headline: "AI Company Creates Fake Customer Testimonials." The optics were terrible. They didn't do it.
Operationalizing Ethical AI
At AirOps, AI ethics isn't a policy document gathering dust. It's an ongoing conversation woven into how they work.
When someone suggests using AI for a new application, they discuss the three questions together. They specifically seek out skeptical perspectives: the goal isn't consensus; it's to surface risks they might not have considered.
Then they decide: yes, no, or yes with conditions. Crucially, they document it. This creates a reference library so future decisions don't start from scratch. Because AI technology evolves fast, they revisit past decisions quarterly.
What a Year of Ethical Decisions Taught Them
Your customers' expectations matter more than industry norms. What's acceptable for a B2C e-commerce brand might be completely different for a B2B enterprise software company. "It's an individual choice from company to company," says Jessica. Know your audience's values, not just your competitors' practices.
Transparency builds trust even when you fail. AirOps experimented with an AI workflow that didn't work well. Instead of quietly abandoning it, they blogged about what went wrong. Customer response: appreciation for honesty. Several customers said it made them trust AirOps more, not less.
Being cautious doesn't mean being slow. They moved fast on low-risk AI applications like internal tools while being more deliberate on high-risk ones like customer-facing content. "We're approaching it cautiously—it can undo a lot of work if you get it wrong, but there's also potential to do really exciting things," says Jessica.
The conversation itself is valuable. Even when decisions are obvious, the process of asking the three questions consistently surfaces considerations they wouldn't have thought of otherwise. The framework isn't just about avoiding mistakes; it's about making smarter decisions.
Building Your Own Framework
Start by defining your non-negotiables. What values are central to your brand? For AirOps, it's transparency and responsible AI use. For a security brand like McAfee, it's trustworthiness and authenticity. List three to five core values. These become your ethical north star.
Identify your high-risk scenarios. Where could AI use damage your brand? Financial services companies worry about misleading customers about investments. Healthcare organizations face risks with medical advice. Media companies must prevent misinformation. Know your specific vulnerabilities.
Create your decision framework. Adapt AirOps' three questions or create your own, but make them simple enough that anyone on your team can use them. Does this align with our values? Would our customers be okay with this? Would we be comfortable defending this publicly?
Document your decisions. Keep a log of AI use cases: what you considered, what you decided, why you decided it, and any conditions. This becomes institutional knowledge as your team grows. (A minimal sketch of what such a log can look like follows these steps.)
Revisit regularly. Schedule quarterly reviews: has anything changed that would make us reconsider past decisions? Your ethical framework should evolve with the technology.
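For teams that want to keep this log somewhere more structured than a shared doc, here is a minimal sketch in Python. Everything in it (the AIDecision class, its field names, and the due_for_review helper) is an illustrative assumption, not anything AirOps has described; the point is simply that every entry captures the use case, the decision, the reasoning, any conditions, and a review date.

```python
from dataclasses import dataclass, field
from datetime import date, timedelta

@dataclass
class AIDecision:
    """One entry in a hypothetical AI-use decision log (illustrative schema)."""
    use_case: str                  # what was proposed
    decision: str                  # "yes", "no", or "yes with conditions"
    rationale: str                 # why, in a sentence or two
    conditions: list[str] = field(default_factory=list)
    decided_on: date = field(default_factory=date.today)
    review_every_days: int = 90    # quarterly, matching the cadence above

    def due_for_review(self, today: date | None = None) -> bool:
        """True once the decision has aged past its review window."""
        today = today or date.today()
        return today >= self.decided_on + timedelta(days=self.review_every_days)

# Example entry, modeled on the blog-outline case described earlier.
log = [
    AIDecision(
        use_case="AI-assisted blog outlines",
        decision="yes with conditions",
        rationale="A human writes the actual content, so deception risk is low.",
        conditions=["A person drafts and edits all published prose."],
    ),
]

# Quarterly review: surface anything that has aged past its review window.
overdue = [entry.use_case for entry in log if entry.due_for_review()]
```

The same fields work just as well as columns in a spreadsheet; what matters is that the reasoning and the review date get recorded, not the tooling.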
The Unexpected Competitive Advantage
AirOps initially saw ethical AI as risk management. It became something more valuable: a competitive advantage.
It builds customer trust in a market flooded with AI tools of questionable quality. Being thoughtful about ethics makes you stand out. It attracts top talent because designers, engineers, and marketers want to work for companies that take ethics seriously. It forces strategic thinking that leads to better solutions: asking "If we can't use AI for X because of ethics, what's a creative alternative?" often produces better ideas than the original plan. And it prepares you for regulation before it arrives.
The Choice Is Yours
"The bright side? It's up to you," says Jessica. "Look to your brand, your industry, and to your own teams to find out what your ethics are and how AI fits in."
You don't need to wait for standards or regulations. You can decide right now what kind of company you want to be, how you want to use AI to strengthen trust rather than erode it, and what guardrails will keep you aligned with your values.
The lack of rules isn't a problem. It's an opportunity to define your brand through your choices.
Bottom Line
In the absence of universal AI ethics standards, your moral compass is your competitive advantage. Build a simple decision framework, use it consistently, and be willing to explain your choices publicly.
The companies that thrive won't be the ones that use AI fastest or most aggressively.
They'll be the ones that use it thoughtfully, in ways that build customer trust.
