Ethical AI Isn’t Optional: How to Use AI Responsibly, Respectfully and Sustainably

Imagine watching yourself on screen - only it’s not you.

Screenshot of Kylie's interview, with an AI-generated video of her looking depressed + text "Ethical use of AI starts with consent, respect, and truth."
ID: Laptop screen showing Kylie Mowbray-Allen wearing glasses and touching her temple during an online interview, with the caption “It’s All About Connection: Kylie Mowbray Allen on Humanizing AI and Marketing.” Text beside the image reads: “AI can copy your image… but not your integrity. Ethical use of AI starts with consent, respect, and truth.”

No words. Just five fabricated seconds.
But somehow, it’s your face, your gestures, your body language - carefully generated by AI.

That happened to me. 👀

After being interviewed for over an hour for a podcast about my background in marketing, business, AI, and resilience, I was sent the final video months later - already live on YouTube. The interview itself had been lovely: warm, positive, and real. But what I saw in the opening moments left me horrified, disappointed, and pretty cross!

The introduction showed a fake version of me, created by AI. During the interview, I’d spoken about my husband’s accident - a life-changing spinal cord injury that left him a C4/5 complete quadriplegic. It’s part of our family story, one we’ve always shared openly. People often tell us how inspiring he is, and I always frame it that way - not as tragedy, but resilience, because the real tragedy would’ve been losing him! (you can read more about that in this blog post). 

So you can imagine my shock when I saw myself portrayed as deeply distressed and traumatised. The AI-generated clip made it look like I’d paused mid-interview, tearful and struggling to speak - as though I was processing trauma. But that emotion never happened during the interview. It wasn’t real. I hadn’t moved that way, paused that way, or felt that way.

ID: GIF via GIPHY. We turned a couple of seconds of the AI video into a GIF, but you can view the whole video for yourself below because yes, alas, it's still up! Ironically, it's titled "It's All About Connection: Kylie Mowbray Allen on Humanizing AI and Marketing".

I’d shared our story the way I always do - pretty matter-of-fact, positive and grounded - and yet those five fake seconds (for me) changed everything that followed.

The episode was meant to be about business. But now, it opened with a false emotional narrative - turning a story of strength into one of pity. And that wasn’t my truth.

I hadn’t consented. I hadn’t approved. I hadn’t even been told it was happening. It hurt my husband, angered my daughter, and stirred up a whole lot that didn’t need to be.

When I reached out to the host, I thought it’d be a simple fix: remove the clip, keep the rest. Instead, I was told it “wasn’t justified” to edit it - that it was only “five seconds of B-roll.” They took it down. Only to later put it back up again. The communication back and forth, and the final response, hurt far more than the video itself.

Because it’s not about five seconds - it’s about the principle of the thing. When someone creates an AI version of you, it’s not just pixels - it’s your likeness, your energy, your story. Dismissing that as “just B-roll,” as they did, completely misses the point.

Accountability was deflected, and responsibility blurred. That’s what happens when ethics don’t lead the process - when efficiency, automation, or cost take priority over consent and integrity.

When Richie saw the video, he went pretty quiet ... “Is that how people see our situation, as causing you such trauma that they'd make an AI video of you, suffering, unable to cope with talking about it?” Clover, on the other hand, was livid… Both of them knew straight away: this wasn't good!

It wasn’t me. It wasn’t my vibe. But it was my face. AI had twisted a truthful story into something that felt deeply wrong because it stole the tone and the truth.


ID: Two laptop screens side by side showing Kylie Mowbray-Allen. The left laptop is labelled “Real me,” showing Kylie smiling naturally. The right laptop is labelled “Not real me (but better hair and slimmer face… oh, and a much bigger office!)” showing an AI-generated version of her touching her hair. A surprised face emoji sits between the two.

And if this can happen to me, someone who literally teaches people how to show up authentically online, imagine what’s happening to people who don’t even realise it’s happening at all.

For me, this wasn’t just a dodgy edit - it was a bloomin' big wake-up call. Somewhere between “AI magic” and “time-saving tech,” we’ve lost a bit of common sense, and a lot of consent, and it can cause hurt.

When truth gets blurry

I kept thinking: what happens when truth gets blurry? When five deepfake seconds can undo years of trust and storytelling? And we all know this isn’t just a “me” problem - it’s where we’re all heading.

AI’s everywhere - in our feeds, inboxes, websites, on the other end of the dreaded telesales calls, even those “helpful” chatbots that so often make everything harder. This morning, a client and I spent an hour trying to fix multiple fraud charges she'd had via an app on her website. No humans were available, just loops of chatbots, automated emails, frustration and dead ends. She finally cancelled the app, and I added it to my growing list of “do not touch this” warnings for clients.

And Meta? OMGIDDYAUNT😫. Don’t get me started! Two weeks ago, my Hello Media page was banned for “cybersecurity” (yep, for posting a Meta AI update 🤨). The appeal was handled by bots too - instant rejection, permanent ban. Yet that very same post is still sitting in my Facebook group! Because it’s not cybersecurity 🤷🤦.

Every week I hear from business owners with the same story, desperate for help because they can’t get any from Meta: flagged words; frozen ad accounts, Instagram accounts or business pages; and zero humans in sight to offer any support. Infuriating, and so wrong.

When automation replaces accountability, things fall apart fast

Remember, AI doesn’t have morals. It doesn’t “mean well.” It’s a parrot with perfect recall and zero conscience and it’ll happily remix your face, your voice, or your life story - all without a second thought (because, well, it doesn’t have one).

And yet here we are, cheering it on like it’s the new office intern: “Look how fast it is!” Meanwhile, so many are letting it loose without supervision, training, or a job description.

This reminded me - painfully - to treat AI like the wildly talented but slightly unhinged assistant it is. Helpful, yes. But it needs boundaries, guidance, and a firm NOPE when it crosses the line.

Collage graphic of Kylie's AI photos
ID: A playful collage of real and AI-generated images of Kylie Mowbray-Allen - cartoons, avatars, deepfakes, and photos - with the text “Only one of these is real me!” showing how easily AI can blur what’s real online. (... Because I do enjoy a bit of silliness, and it's a great way to practise prompting. Spot the only one that's real 🤣.)

What Ethical AI Really Means

Ethical AI isn’t about compliance; it’s about care, and about using technology in a way that respects people, creativity, and the planet.

It’s asking questions like:

  • Who could this harm - even unintentionally?
  • Where did this data or imagery come from?
  • What’s the cost - in energy, emotion, and trust?

Because every “just testing this out” generation burns energy, water, and time. AI doesn’t float in a cloud - it runs on power-hungry data centres that are humming day and night.

Graphic that says, "If we care about sustainability, we need to include digital sustainability too."
ID: Quote graphic with the words: “If we care about sustainability, we need to include digital sustainability too.” - Kylie Mowbray-Allen.

When “innovation” forgets the human

It’s easy to think ethical AI is something only the big companies need to worry about ... you know: the tech giants, the worldwide agencies, the Deloitte‑level blunders (yikes, that’s been a debacle - more on that in my post here). But ethics doesn’t scale down, and the second you hit generate, you’ve stepped into the same moral arena as everyone else.

I learned that first‑hand, but in my case, there was no malice, just mindlessness. It was seen as “a creative enhancement.” The real person (ME!) behind the pixels was forgotten. And that's the bit that hurts!

AI gives us endless options: edit faster, speak louder, post more often. But when convenience trumps consciousness, we're trading away integrity one click at a time.

The Cost We Don’t See (and How to Do Better)

AI might make life easier, but it’s not free for the planet. 🌏

Those dozens of image generations or “quick rewrites” might seem harmless and actually pretty fun, but they have a real environmental footprint.

Some large AI systems consume as much energy and water as small towns.

Every time we prompt “please just try one more version,” or “please rewrite without any em-dashes”😝, we’re burning energy - and enough cooling water to literally fill hundreds of water bottles.

So how do we create smarter, not harder - and tread lighter on the planet?

💡 Batch your brilliance.

Instead of prompting over and over (“just one more tweak!”), teach your AI once, properly.

Load your Hello Business Brain into your own custom GPT or AI Assistant before you start creating. (I teach this in my AI Assistant workshop and Masterclass follow-up, if you’re not sure how!)

That way, when you’re building a blog, caption, or email, your AI already knows your tone, offers, audience, and vibe.

You’ll cut back on endless regenerations, save time, and massively reduce your digital waste.

Think of it this way: every time you start from scratch instead of reusing and refining, it’s like buying a new plastic water bottle instead of refilling your own.

Build once, reuse often, because that’s sustainable prompting.

♻️ Repurpose before you regenerate.

Got content that already performed well? Refresh it with new context, don’t rebuild it from zero.

Repurpose is the new recycle in AI terms - less waste, more results. 

We can’t fix the planet with one prompt, but we can stop pretending our digital creations are impact-free. Be curious, be conscious, and make your next click count.

ID: Quote graphic with the words: “We can’t fix the planet with one prompt, but we can stop pretending our digital creations are impact-free. Be curious, be conscious, and make your next click count.” - Kylie Mowbray-Allen


🌏 Choose lighter tools (and smarter ways to use them).

The same task can use 10x more energy depending on where and how you run it.

Browser-based AI (like ChatGPT online) generally consumes less than app-based or constantly-synced desktop versions.

Even better? Build your own custom AI assistant that actually knows your business - it’ll get things right faster, so you prompt less and waste less.

Think of it like fuel economy:

  • 🚗 prompting in a random chat = city driving in traffic
  • 🚀 using a trained assistant = smooth highway run

Shorter sessions. Fewer re-writes. Lighter footprint.

How to Use AI Better (Without Losing Your Authentic YOUness)

Our brains love shortcuts. That’s why AI feels so good: instant ideas, instant answers. BUT when we stop questioning, we lose control of the narrative.

Here’s how to stay human, ethical, and sustainable while using AI in your business 👇

So here’s the thing: AI isn’t going away, but how we use it is still in our hands. We can choose to use it ethically, creatively and consciously - or let it run amok and clean up the mess later.

Graphic: “Ethical AI starts with us.” A typewriter with a lightbulb and heart, and an AI robot handing a lightbulb to a reaching hand.
ID: A vintage typewriter with a sheet of paper reading ‘Ethical AI starts with us’ sits on the left. On the right, a laptop shows a small friendly robot handing a glowing lightbulb to a human hand. Across the top are the words: ‘Respect • Responsibility • Realness.’ The image represents collaboration between humans and AI with ethical awareness.

AI can amplify brilliance or magnify mistakes. It depends on whether we lead it, or let it lead us. And I’ll say this till the bots stop listening: technology without ethics is just chaos with better lighting.

Before you prompt, post or publish, ask:

  • Does this reflect my values?
  • Would I be proud to put my name to it?
  • Is this helping or harming trust in me, my business, or the people I serve?

✅ The Ethical + Sustainable AI Self-Check

Here’s what I’ve learned - and what I now teach in every workshop, coaching call, and conversation about AI: ethics isn’t optional.
If you’re using AI in your business, you’re shaping the digital world we all have to live in. So, let’s make it one we’re proud of.

1️⃣ Ask for consent - always.
If you use someone’s face, voice, or story, get their permission. No grey area.

2️⃣ Be transparent.
If AI helped, say so. Your audience won’t think less of you - they’ll trust you more.

3️⃣ Fact-check everything.
AI sounds confident, but it doesn’t always know what it’s talking about.

4️⃣ Repurpose before you regenerate.
Less prompting = less energy = less waste. (That's why I've created some epic CustomGPTs - they streamline your content writing and make your outputs way better, way quicker, and way more awesome!)

5️⃣ Stay human.
If it doesn’t sound like you, rewrite it. Your tone is your fingerprint.

6️⃣ Be proud of your process.
If you’d cringe explaining how it was made, that’s your red flag.

7️⃣ Lead with empathy.
If it could hurt, humiliate, or misrepresent someone, just don’t publish it.

The Bigger Picture

AI isn’t the villain - it’s a mirror: it reflects the values we feed it.

When we use it consciously, it can amplify creativity, accessibility, and human connection, but if we use it carelessly, it can distort the truth, drain energy, and destroy trust.

We don’t need to fear AI - we just need to stay awake while we use it.

Ethical AI isn’t about perfection. It’s about presence.

Let’s create content that sounds like us, honours others, and respects the planet that powers it all. 🌻


Let’s have the real conversation about AI ethics - what’s your honest reaction after watching? 👇
