Podcast: Play in new window | Download
Subscribe: Spotify | RSS | More
Jonathan Gillham talks with Jason Barnard about how publishers can embrace integrity in the era of generative AI.
Jonathan Gillham is the Founder and CEO of Originality.ai, which provides a complete toolset that helps Website Owners, Content Marketers, Writers and Publishers hit publish with integrity in the world of Generative AI.
Jonathan reveals the hidden risks and ethical challenges of AI content creation. He emphasizes the importance of keeping the human in the loop to mitigate risks and maintain authenticity. He discusses the necessity of unique data and insights in content to add value beyond simple text. He also highlights the challenges of setting clear policies within organizations regarding the use of AI.
From dangerous mushroom-picking guides to content that puts brands at risk, learn why maintaining the human element is crucial for business success. You will get insider insights on balancing AI efficiency with authenticity, implementing smart content policies, and avoiding Google’s AI spam penalties. Plus, you will uncover practical strategies for creating value beyond words in today’s AI-driven content landscape.
What you’ll learn from Jonathan Gillham
- 00:00 Jonathan Gillham and Jason Barnard
- 02:45 What Exactly Does Originality.ai Do?
- 03:28 What Specific Words Are Commonly Introduced by ChatGPT and Other AI?
- 03:57 Why Does Jonathan Gillham Think Humans Are Starting to Write Like Machines?
- 04:54 Why Does Jonathan Gillham Think AI’s Ability to Imitate Humans Can Be an Increasingly Big Problem?
- 05:49 What Are the Main Problems With AI-Generated Content?
- 06:10 What Are the Two Critical Additional Problems With AI-Generated Content, According to Jonathan Gillham?
- 09:04 Why is Fact-Checking Such a Huge Problem for Large Language Models (LLMs)?
- 10:56 Why is It So Important to Keep Humans Involved in the Process of AI Content Creation?
- 12:20 What Does It Mean to Go Beyond Words in Content Marketing?
- 13:49 Why is Simply Feeding an AI With Your Content Not Enough for It to Fully Replicate Your Knowledge?
- 16:06 What are the Effective Ways to Use Prompts to Give the Bot the Instructions for Content Creation?
- 17:14 How Can You Distinguish Between Human Writing and Bot-Generated Content?
- 19:40 How Can A Business Person Ensure Policies for the Use of AI Are Applied Across Their Organization?
This episode was recorded live on video on November 26th, 2024.
Links to pieces of content relevant to this topic:
https://originality.ai/
Jonathan Gillham
Transcript from Publishers Embrace Integrity in the Era of Generative AI – Fastlane Founders with Jonathan Gillham
[00:00:00] Narrator: Fastlane Founders and Legacy with Jason Barnard. Each week, Jason sits down with successful entrepreneurs, CEOs, and executives, and gets them to share how they mastered the delicate balance between rapid growth and enduring success in the business world. How can we quickly build a profitable business that stands the test of time and becomes our legacy? A legacy we’re proud of. Fastlane Founders and Legacy with Jason Barnard.
[00:00:31] Jason Barnard: Hi, everybody, and welcome to Fastlane Founders and Legacy. I’m here with Jonathan Gillham. A quick hello and we’re good to go. Welcome to the show, Jonathan Gillham.
[00:00:46] Jonathan Gillham: Hey, thanks, Jason. Thanks for having me.
[00:00:48] Jason Barnard: An absolute delight. You’re the founder of Originality.ai, and we’re going to be talking about AI and ethics, and about not using AI to produce all of your content: you need the human aspect, and you need to make sure that human aspect is maintained over time for your corporation. It’s very tempting to try to save time. But before we do that, our specialty at Kalicube is Brand SERPs, and I was looking around at your name. Google has you in its Knowledge Graph. It understands who you are, and you’ve got what we call a tiny Knowledge Panel sprout. For people listening rather than watching on the video, we’re now looking at a tiny Knowledge Panel sprout with Jon’s name and his photo. And that is a great start to understanding from Google, and a great start to getting it to represent you this way. And we’re going to be talking about AI, number one. On the left-hand side, Google is representing Scott Duffy as the superstar he is as an entrepreneur, and on the right-hand side, ChatGPT is able to explain exactly who he is, what he’s done, and that he’s worked with Richard Branson in the past, for example.
So educating AI is what we do. What exactly do you do at Originality.ai? Please explain, Jonathan.
[00:01:58] Jonathan Gillham: Yeah, sure. So I’ll give a quick background to make it all make sense. We ran a content marketing agency for a number of years and ended up selling it. It was one of the heaviest users of Jasper AI, which predated ChatGPT, and we were transparently using AI to create content for clients and passing on those efficiency savings. The question that started to come up was, how do we know your writers aren’t using AI? It’s like, well, we have a policy, but we didn’t really have the right mitigating steps and controls in place. And so we saw this wave of generative AI coming, ended up building an AI detection tool, and launched it actually the weekend before ChatGPT launched. So a bit of lucky-unlucky timing in different respects, and yeah, it’s been a ride since then. But what we do is help anyone who’s acting as a copy editor ensure that the content they’re going to be publishing meets their standards, whether that’s written by AI or not: plagiarism checking, fact checking, readability, grammar, spelling.
[00:03:00] Jason Barnard: And so it’s a way for the content writers or the bosses of the content writers to check for originality.
[00:03:09] Jonathan Gillham: Yeah, the bosses of the content writers, often, or the copy editors, who generally function as the bosses of that team of writers. But you know, I think a lot of people are happy to pay $100 for a piece of content, and not super happy to find out it was copied and pasted out of ChatGPT.
[00:03:28] Jason Barnard: Right. Well, what I generally do is look for words like elevate and a new frontier. These are all words that ChatGPT and other AI have introduced into the language as common terms that people supposedly use. For me, we don’t use frontier or new frontier very often. We don’t use elevate. And weirdly, people are now starting to use them. Are we starting to write like the machines? It’s going to get harder to detect.
[00:03:57] Jonathan Gillham: So it’s actually interesting. As humans, we have two cognitive biases that make us think we can identify AI content. We have this overconfidence bias: if you ask a room full of people how many of them think they’re an above-average driver, 80% are going to put up their hands. And we often think we can see patterns where no patterns exist. So there are definitely some words that are being used by ChatGPT at a higher rate than normal, and as that gets into the world of literature that people are reading and consuming, it’s going to drive us to use those words more. But the ability of humans to actually pick out AI content, especially if there’s been any attempt to write like so-and-so, gets down to basically a flip of a coin.
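The informal approach Jason describes, scanning for tell-tale vocabulary, can be sketched as a simple word-frequency heuristic. This is purely illustrative: the marker-word list below is an assumption based on words commonly cited as over-used by LLMs, not a validated detection vocabulary.

```python
import re
from collections import Counter

# Words often cited as appearing at above-baseline rates in LLM output.
# This list is illustrative only, not a validated detection vocabulary.
AI_MARKER_WORDS = {"elevate", "delve", "frontier", "tapestry", "leverage", "landscape"}

def marker_word_rate(text: str) -> float:
    """Return the fraction of tokens in `text` that are marker words."""
    tokens = re.findall(r"[a-z']+", text.lower())
    if not tokens:
        return 0.0
    counts = Counter(tokens)
    return sum(counts[w] for w in AI_MARKER_WORDS) / len(tokens)

sample = "We will elevate your brand and delve into a new frontier of content."
rate = marker_word_rate(sample)  # 3 of 13 tokens are marker words
```

As Jonathan points out, heuristics like this are close to a coin flip in practice, especially against content written in a specified style; they illustrate the intuition, not a reliable detector.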
[00:04:44] Jason Barnard: Right. And do you think that’s going to be an increasingly big problem, that is, AI is going to get better and better at imitating real people?
[00:04:55] Jonathan Gillham: Yeah, I think it’s going to be an interesting problem that society as a whole is going to have to wrestle with: where are we okay with AI-generated content, and where are we not okay with it? I think there are great use cases for it. There are also use cases that we’re not very happy about. If we read a review for baby formula that was AI-generated, we’re not very happy about that versus a human-generated review. Similarly, and oddly, there have been a couple of mushroom-picking books that were AI-generated and then published on Amazon, and the books had some dangerous material in them that would have resulted in death if somebody had followed what the book suggested. Those turned out to be AI-generated books. So I think it’s a risk that brands using it need to understand. It’s not necessarily a bad thing or a good thing; it’s just a risk that needs to be managed and correctly mitigated.
[00:05:47] Jason Barnard: Right. So just off the top of my head, I can see multiple problems with AI-generated content. One is factual correctness: they don’t fact-check, and hallucinations are common. Another is style. Another is expertise, adding something new to human existence. Those are just three; I’m sure there are a lot more.
[00:06:10] Jonathan Gillham: Yeah. The problems it introduces to a business are, one, the fairness component: if you’re okay with somebody using AI and you’re paying the writer on a freelance basis, then who should be the one that captures the value of that efficiency lift? That’s one. The second is that Google has come out and been extremely clear that they are very much against mass-published AI spam. And then it’s up to our interpretation where that mass-published AI spam starts and stops, and when it turns into value-added, useful content versus spam. Although we’d love a nice, clean answer from Google, the reality is that Google will do things to deal with one problem and there will be ripple effects elsewhere. So Google punishment and fairness are two of the key risks with using AI content.
[00:07:12] Jason Barnard: Right. Well, Google punishment aside, there’s also human punishment, in that I land on a page written by AI, and once again I’m overestimating my ability to recognize it, but I find that I fall asleep. I stop focusing, I stop concentrating when it’s written by AI, and my guess is that’s because it’s simply predicting the next most probable, obvious word. So it’s writing in a very obvious manner, at which point my brain switches off. Am I dreaming this?
[00:07:44] Jonathan Gillham: With poorly used AI, 100%. You’re not dreaming that: it produces text that is recognizable. But with AI used like, hey, here are my ideas, using this style, turn my dumb ideas into something that sounds a little bit more intelligent, I think it can produce content that is a little bit more surprising. Personally, I’m an engineer; I’d rather communicate in spreadsheet format. Words don’t come to me as naturally as they do to writers, and I often use it myself to dump in my half-baked dumb ideas, and it lays them out in a structure and wording that makes way more sense than what I would have produced.

So certainly you’re not wrong about poorly produced, straight-prompt, “write me content about X” output. You’re going to get a derivative work of everything that already exists.
[00:08:42] Jason Barnard: Right, okay. And in terms of accuracy, getting it right and not writing about poisonous mushrooms and suggesting that we might want to eat them: fact-checking is a huge, huge problem. Is that a problem that LLMs, ChatGPT, Perplexity, Google Gemini and so on and so forth, are going to be able to solve in the short term?
[00:09:03] Jonathan Gillham: I don’t think so. LLMs work by being a little bit, we’ll call it unbalanced. There’s some surprise, a little bit of spontaneity, to what they produce. If we went back to AI writing pre-GPT-3, it was just grotesquely terrible, and it’s this unbalancedness, this creativeness, that has resulted in it being as good as it is. Introducing factual errors is part of that. So I think it’s going to continue to get better, but using LLMs to achieve a near-perfect output is never going to work. And what’s made it additionally challenging for editors is that historically, if you saw a poorly written piece of content, you might think, oh, I really need to edit this, I really need to check this for factual accuracy; this looks like a piece of content that hasn’t had a level of rigor applied to it. Whereas now it’s grammatically perfect content that has the factual errors in it, and that produces a new set of challenges.

We built a tool that’s a fact-checking aid, and LLMs can help in that fact-checking process, but they’re never going to be in a position to say with certainty whether something is true or not true.
[00:10:16] Jason Barnard: Yeah, I mean humans can’t say this is true or not true. And truth and fact, what is that thing? Truth is my interpretation of the fact.
[00:10:29] Jonathan Gillham: That’s what I’m saying.
[00:10:29] Jason Barnard: So we’re in a situation where, I mean as human beings, truth and fact aren’t actually very clear. LLMs even less so. Keeping the human in the loop phenomenally important. I mean I hear that a lot. Andrea Volpini from Wordlift, a friend of mine, talks about it a lot. Tell me about keeping the human in the loop. How can we do that sensibly whilst also keeping the gains in terms of efficiency?
[00:10:56] Jonathan Gillham: Yeah. I’ll say two things. One, that’s kind of what our tool is trying to do: empower an editor to make the choices they want to make in terms of the quality of the content they’re going to publish. Do they want AI-generated content or not? Plagiarism, fact checking, grammar, spelling, readability, etc. The second piece is, how do you produce content in a world where the production of words is so prolific that it’s at near-zero cost? You need to bring something else to your content to make it worthwhile. If it’s just your words, you’d better have some really clever words and some really clever insights to communicate. The more efficient path to winning in the world of content marketing right now is to think beyond words, whether that’s first-person research, a tool, or data synthesis around whatever topic you’re talking about. But human in the loop, I 100% agree with human in the loop for all marketing material.

And then how you go beyond words is, I think, something that the people who are winning this game of content marketing right now are thinking about and executing well on.
[00:12:16] Jason Barnard: Right. Can you expand on going beyond words?
[00:12:19] Jonathan Gillham: Yeah. We’ll use our case at Originality. We have a bunch of free AI tools with content beneath them, but those free AI tools are adding value to the user beyond words. We’ll run unique studies, such as one that looks at the amount of AI content in Google, and that study then becomes unique data that no LLM has. We have a bunch of content beneath it, and whether that content was written by AI or a human, the value of that page for people comes from the dashboard they’re seeing. So those are a couple of examples of going beyond words: the words are there, but how can we go beyond just the words and add additional value with unique research, a calculator, insight, et cetera?
[00:13:11] Jason Barnard: Unique data in particular is going to help in terms of creating that content. A year and a bit ago, I actually asked ChatGPT, how do you build a Knowledge Panel on Google? And it answered very, very badly. It told me lots of things to do that I know don’t work. So I then set up Kalibot at Kalicube and fed it with all my articles and all the things that I’ve said about how to build Knowledge Panels, and now it answers quite sensibly. Is that going to be enough? I just feed it with everything I’ve said or written, and now it knows everything I know, and I can ask it to write all my articles for me?
[00:13:49] Jonathan Gillham: So I think it’s better, but I don’t think it’s enough. Again, it’s better than just saying, hey, go do X. But Google is facing an existential threat right now in terms of its search results being overrun with nothing but AI-generated content. And I think honest, well-intentioned actors that are using AI are also going to get wrapped up in updates that target the bad actors, which are certainly out there, attempting to spam Google with 10,000 articles a day on a site, focusing on different topics that are hopefully easy to rank for. So is something like Kalibot enough? I think it’s significantly better. I think it’s a great writing aid, but turning it loose without that human in the loop would not be enough.
[00:14:47] Jason Barnard: Right, well, I agree 100%, and I’ll give you a really good example. I wrote an article and then asked Kalibot to rewrite it. I gave the two copies to a journalist and said, which one do you choose? And the journalist said, without a doubt, that one, which was the one I had written. Grammatically it was probably slightly less good, but it had soul. And I’m desperately hoping that Kalicube’s Kalibot will never have my soul; it’ll never be able to imitate my soul.
[00:15:23] Jonathan Gillham: Yep. Yeah, no, that’s interesting. I think that might be a reflection on the quality of your writing. I would guess that if I did that same experiment, they’d be picking the GPT-bot-written one.
[00:15:37] Jason Barnard: Yeah, well, I think as well about the way we express ourselves. For example, what I find with bots is they tend to use the passive voice a lot rather than the active voice. And if I tell it to use the active voice, it goes completely overboard. So to what extent can I use prompts to give the bot the instructions, sorry, the rules of engagement, in order to create the content? I can feed it extra content, I can give it a prompt.
[00:16:06] Jonathan Gillham: The most useful thing I’ve found is to have it write in a similar style to yours. I have a GPT bot loaded up with a small amount of data so that it writes in a similar style to me: not super wordy, lots of bullet points. That way, when I ask it to do X, like, hey, here are some dumb thoughts, turn this into an email, it doesn’t require a lot of additional work.

So I think that’s probably the lowest-hanging fruit for most people: have a GPT bot that writes in a style similar to theirs, so that when you use it as a writing assistant, there isn’t a significant additional load of effort removing those odd tics.
[00:16:54] Jason Barnard: Right. And in terms of authorship: if I ask the bot to write something for me and then publish it as though it was me, is it possible for you to tell that it’s not me, because there’s a specific style to my writing that you can identify?
[00:17:11] Jonathan Gillham: There’s a specific style to AI writing that is identifiable with a certain level of accuracy, even if you ask it to write like so-and-so, like the Kalibot. The accuracy of AI detection is around 99% for ChatGPT content, with a 1 to 3% false positive rate. So it correctly identifies AI content 99% of the time, and incorrectly identifies human content as AI 1 to 3% of the time. Highly accurate, but not perfect.
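Those two numbers interact with the base rate of AI content in the pool being checked. A quick Bayes' rule sketch shows what a "flag" actually means in practice; the base rates below are assumptions for illustration, only the 99% detection rate and 1-3% false positive rate come from the conversation.

```python
def prob_ai_given_flag(tpr: float, fpr: float, base_rate: float) -> float:
    """P(content is AI | detector flags it), via Bayes' rule.

    tpr: true positive rate (share of AI content correctly flagged)
    fpr: false positive rate (share of human content wrongly flagged)
    base_rate: assumed share of AI content in the reviewed pool
    """
    p_flag = tpr * base_rate + fpr * (1 - base_rate)
    return (tpr * base_rate) / p_flag

# Figures from the conversation: ~99% detection, 1-3% false positives.
# The base rates are illustrative assumptions.
half_ai = prob_ai_given_flag(0.99, 0.01, 0.5)   # pool is half AI
rare_ai = prob_ai_given_flag(0.99, 0.03, 0.1)   # pool is 10% AI
```

The point is that the same detector is far more trustworthy when AI content is common in the pool than when it is rare, which is one reason a flag should trigger human review rather than automatic rejection.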
[00:17:51] Jason Barnard: Right. Okay. I think a year and a half ago, everybody was thinking, we’ll be able to generate all the content, we won’t have to work anymore. And then I read an article the other day in the Guardian, which isn’t a specifically geeky place to be reading articles, and somebody was saying it actually creates more work for their team, because they generate the article, then they generate another version of the article, and then they get confused and can’t figure it out, and it ends up taking more time to write than it would have done without the AI. Is that something common that you’re seeing, or is that people being very demanding?
[00:18:30] Jonathan Gillham: I’d say this is a new world that we’ve been in for the last couple of years, and there are people doing all kinds of weird stuff in it. There are people writing content with AI and then trying to make it not look like it was written by AI, to try to game Google. They’re introducing errors into the content to make it not look like it was written by AI and then publishing it on the web, hoping that because it doesn’t look like AI, even though it’s now a worse article, it’ll perform better. There are just these really illogical things happening. So I don’t think it needs to introduce inefficiency. I think AI introduces wonderful efficiency lifts. What introduces inefficiency is a lack of understanding of the policy that’s put in place within an organization around what the allowable use of AI is or isn’t.
I think people twist themselves in knots when they don’t understand that.
[00:19:30] Jason Barnard: So to finish up then, from a pragmatic perspective, as a business person, how would I go about setting out the policies and having them applied across my organization?
[00:19:41] Jonathan Gillham: Yeah. So, again, AI content is a wonderful tool, but it introduces risk to a site. Decide where you sit on that risk-reward profile, communicate that to your editor and writing team, have a written policy on what the appropriate use of AI is or isn’t, and then measure that use with a tool like Originality that identifies whether AI content was used.
[00:20:11] Jason Barnard: Okay, brilliant, wonderful. So we all have a lot to think about, and as you said, it’s a new world. New problems that we didn’t even know existed are now coming up, and it’s changing really fast. I’m finding it a lot of fun, and at Kalicube, from our perspective, helping people manage how they’re represented by AI and by Google is hugely interesting, and I enjoy it greatly. Thank you, Jonathan, for that insight into the ethics of using AI and how to use it correctly within our companies. I’m going to give you the outro song, a quick goodbye to end the show. Thank you, Jonathan.
[00:20:54] Jonathan Gillham: Awesome, thanks, Jason. Been fun.

[00:20:56] Narrator: Your corporate and personal brands are what Google and AI say they are. We can give you back control. Kalicube.
