Futureproof.marketing - 4th Industry Marketing

NEWSLETTER: Gen AI and marketing - the basics (and why Gen AI is not 'Intelligent')

Generative AI seems to be the only thing in the news for marketers right now.

Given all the rubbish in the media, it's incredibly easy to become confused and misunderstand what it means for your brand, company or culture in general. On one side there are stories telling you the world is about to end / we are going to be taken over by AI. On the other side, there are marketing-focused stories that mainly centre around text-based generative AI (ChatGPT) and generative images, telling you that it can do everything and everyone will lose their jobs.

As always, the truth is somewhere in the middle and I will aim to give a balanced view.

In this newsletter I aim to give an overview of the reality of AI, debunk some of the scary myths, and focus on some of the uses right now. I won't go into lots of detail about how AI is being used - this is a more 'basic' overview of the state of play. I will do deeper dives on use cases and the future of travel in a later newsletter.

Hopefully this will create a balanced view about the current uses and future risks of this incredibly powerful tool and leave you feeling a little better informed and less worried. 

 

The newsletter is long so please skip to the bits you are really interested in.

It will go through the following sections. 

1) A frame of reference - my experience in AI  

2) Definitions - there is nothing more confusing and contradictory so hopefully I will clear this up 

3) Narrow AI 

4) General AI: starting wide - the big fears and why they make no sense

5)  The smaller fears and why they're much more scary 

6) Regulation - what's actually happening, why it matters, and why you shouldn't believe the conspiracies

7) What we have right now and what works

 

1) A frame of reference 

I first discovered machine learning (ML) in 2015. ML is what powers AI – I will explain more of this in section two.

I realised that by absorbing large volumes of conversational data sets (from social media / CRMs / media etc) and by having the machine identify the patterns within them, you could understand what humans really felt and thought about almost any brand / category - and understand their psychology. The ramifications for the advancement of marketing research were huge.

At the same time, I realised you could use this data to power conversational AI, which would enable one-on-one conversations between machines and humans, and brands would be able to own these 'Virtual humans' and develop emotional relationships with their consumers through the Avatars.

Since then I've been on a journey with the goal of harnessing machine learning to truly understand how humans think and feel, and to create virtual humans that seem to think and feel - as chatbots, Avatars and, more recently, Co-pilots.

As a result I studied machine learning on an exec programme at MIT and have focussed on how to use the tech in marketing. You can read an article I wrote in 2019 (it reads as quite naive now) about why I thought back then that AI and machine learning were the most important things for a marketer to study and understand.

https://futureproof.marketing/strategy-blog/why-marketers-must-study-machine-learning-ai-in-2019 

 

I then came up with a theory about marketing in what Klaus Schwab calls 'The 4th industrial revolution' - the fusing of the physical with the digital and virtual. You can read about that here.

 

https://futureproof.marketing/strategy-blog/2019/1/27/marketing-ai-and-the-4th-industrial-revolution-the-background 

 

This frame of reference, and an understanding of how consumer behaviour and touchpoints will change due to this technology, drove me to develop AI audience intelligence products like Data Kinetics, Metaverse worlds, and Virtual Human characters and tech, as I believe the future will see us all interacting with machines as our colleagues, advisors, mentors and friends - both within a company and as consumers out there in our daily lives.

 

2) Definitions. 

A few basics. 

Machine Learning is the set of pattern-matching and recognition algorithms used to determine 'actions' by the machine.

The AI bit is the bit where the machine does something a human would normally do – turning the steering wheel of the car because there's a dog in the road, or generating an image of an astronaut on a horse. The ML is the system behind it that recognises the dog in the road via computer vision, or has identified all the attributes that make up an astronaut and a horse and so can generate a new one.
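To make that split concrete, here is a deliberately tiny Python sketch (the sensor numbers and labels are invented purely for illustration): the 'ML' part matches a new reading against past patterns, and the 'AI' part is just the action taken on that match.

```python
# Toy illustration of the ML vs "AI" split described above, in plain Python.
# The "ML" bit is pattern matching against past labelled examples;
# the "AI" bit is simply the action taken on the resulting prediction.
import math

# Pretend past sensor readings: (size, speed) pairs labelled "dog" or "not_dog".
examples = [((0.6, 0.2), "dog"), ((0.7, 0.3), "dog"),
            ((0.1, 0.9), "not_dog"), ((0.2, 0.8), "not_dog")]

def classify(reading):
    # ML bit: find the closest past pattern (a 1-nearest-neighbour match).
    return min(examples, key=lambda ex: math.dist(reading, ex[0]))[1]

def drive(reading):
    # AI bit: do something a human would normally do, based on that pattern match.
    return "brake - dog in the road" if classify(reading) == "dog" else "keep driving"

print(drive((0.65, 0.25)))   # -> brake - dog in the road
```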

Many people deeply involved with AI believe it should really be called applied statistics, because if you look closely all it is doing is analysing past patterns and predicting, replicating or mashing up those patterns. The word 'intelligence' makes you think there is some human-like reasoning going on, when in fact there isn't at all.

And this brings us to the difference between narrow AI and artificial general intelligence. 

3) Narrow AI

Everything you have ever seen to date is narrow AI: a machine learning based system built to achieve one very narrow, specific goal, such as creating an image or paraphrasing a large volume of text.

Some current narrow AI does a good job of seeming human because you can converse with it. This is where GPT has become so well known.

However, all it's actually doing is predicting the next letter or word that comes after the previous one (vector tokenisation), and in doing so it creates the appearance of intelligence - but it has none.
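As a toy illustration of that next-word idea (nothing like the scale or sophistication of GPT, but the same underlying principle), you can simply count which word most often follows each word in a tiny corpus and emit the most frequent follower - pattern frequency, no understanding:

```python
# A minimal next-word predictor built from nothing but counted word pairs.
from collections import Counter, defaultdict

corpus = "the dog ran in the road the dog sat in the sun".split()

following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1            # count which words follow which

def next_word(word):
    # Pick the most common follower - pure pattern frequency, no understanding.
    return following[word].most_common(1)[0][0]

print(next_word("the"))   # -> "dog" (the most frequent follower in this tiny corpus)
```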

4) Wide AI - AGI 

Artificial General Intelligence – where a machine has the ability to reason, judge and act with agency across multiple areas – is something we are not yet close to.

As narrow AI is only predicting the next letter or word that comes after the previous one, there is zero chance of it taking over the world. It has no consciousness, it has no agenda, and anyone who tells you it does have consciousness simply does not know what they're talking about.

This is where we need to start talking about Transformers - the T in ChatGPT.

Previously, when we were fiddling around trying to make chatbots, I used to have to input thousands of sentences and sentence fragments into spreadsheets to direct the machine to determine what words it should use when responding to questions. This system was incredibly accurate, but it took a lot of data and work, and we needed to create libraries of data on every subject for the machine to be able to communicate in what seemed like an intelligent way.

Where this changed was with the transformer component of neural networks - this is effectively where the machine rolls a weighted dice to choose each new word or letter after the previous one, based on learned weightings plus an element of randomness.

By doing this the machine can talk about almost anything. The answers it gives, however, are not guaranteed to be accurate - and this is why the machine seems to hallucinate and come up with answers which don't make sense, although ways around this are being worked on.
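Here is a minimal sketch of that 'weighted dice roll', with made-up probabilities: sampling a next word rather than always taking the most likely one is what lets the machine talk about almost anything, and also why it sometimes produces fluent-sounding nonsense.

```python
# A toy version of the weighted dice roll: pick the next word at random,
# in proportion to its weight. (Real transformers learn these weights from
# huge datasets; the numbers below are invented for illustration.)
import random

next_word_weights = {"road": 0.5, "park": 0.3, "moon": 0.2}

def sample_next():
    words, weights = zip(*next_word_weights.items())
    return random.choices(words, weights=weights, k=1)[0]

print("The dog ran into the", sample_next())
# Most rolls give something sensible ("road", "park"), but occasionally the
# dice lands on "moon" - fluent output that isn't grounded in anything real.
```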

And now we get into the true danger of AI 

5) Why the smaller fears are more scary 

By now, hopefully, you will realise that narrow AI has no chance of taking over the world.

What these systems are brilliant at, however, is impersonating humans - they can do this now with imagery, voice, text and video.

Where this becomes dangerous is when immoral creators / companies / governments seek to profit by creating multiple fake characters / storylines / conversations that generate conspiratorial chatter, which social media algorithms then amplify and the general media often pick up on. The result is millions of people believing a bunch of stuff that has only been generated either to make clicks for money or by governments seeking to destabilise their competitors - sound familiar?

If you want to understand how this works, please look here, where it describes how, way back in 2015/16, much simpler social media and machine learning tools were used by the Internet Research Agency, the Kremlin's propaganda arm.

I like people to read this because it is work that was being done 10 years ago, and I hope it therefore gives people an understanding of how much more advanced computational propaganda now is through the use of AI. It also explains what the disinformation is aiming to do: basically, to make electorates lose faith in their government and institutions by spreading so many contradictory conspiracy theories that the electorate doesn't know what to believe and subsequently loses faith in everything their government purports to represent. This is how you destabilise countries and allies - you make them fearful, mistrusting and inward looking, and you make them fight each other.

And here is a recent simple article on a network of thousands of fake and misleading accounts based in China where the users posed as Americans and sought to spread polarising content about US politics and US-China relations. 

https://www.bbc.co.uk/news/technology-67560513 

 

This is where the real danger currently lies and this is what we need to address. And this is what a lot of the regulation is attempting to address. 

 

6) Let's regulate:

Different countries are seeking to implement different AI regulation - below is a quick overview of how I see the different approaches. 

I think this is important to understand because, again, there is a lot of disinformation suggesting that regulation is only there to entrench the elites' power structures. What I can promise you is this: if AI is not regulated from an information-dissemination point of view, it will become impossible to determine what has been generated by humans, what is real and what is not.

Unless we can tell the difference between real authors - with human agendas and opinions - and AI-generated characters and stories, and unless those who generate fake AI stories fear being held responsible for what they create, it is guaranteed that conspiracy agendas will be amplified.

Human rationality, commentary and discussion about the best way forward for societies stands no chance against machine generated, algorithmically pushed disinformation. 

Below are the key areas of AI regulation – and remember, a lot of these regs need to consider regulating against use cases and things that haven't happened yet. That's a tough place for regulation, but it's great we are getting on top of this.

 

EU – Regulate first – ask questions later (probably sensible given the ramifications). 

 

The AI Act is the first global AI regulation act, with sweeping powers to mitigate AI harm.

There is a good overview of this in MIT's Technology Review.

They say:

'The AI Act was conceived as a landmark bill that would mitigate harm in areas where using AI poses the biggest risk to fundamental rights, such as health care, education, border surveillance, and public services, as well as banning uses that pose an "unacceptable risk".'

  

https://www.technologyreview.com/2023/12/11/1084942/five-things-you-need-to-know-about-the-eus-new-ai-act/ 

 

Good work the EU. 

 

UK – chat first, regulate by consensus later – how lovely! 

 

The UK AI summit was held at Bletchley Park – the home of the first computer and where Alan Turing broke the Enigma code that helped win WWII. It's also where my grandfather, Bill Spencer, worked during the war – I seem to be following in his footsteps, although we don't really know what he did in 'Hut 6' at Bletchley; everybody took the Official Secrets Act very seriously back then.

 

Twenty-eight governments attended, and it was about managing big risk in big ways – a meta-level meeting about extreme risk and safety.

It was quite something that the communiqué got 28 countries, including China and the US, on the same stage, with international safety institutes formed to test 'frontier' AI models for safety and security.

This culminated in the Bletchley Declaration, a commitment to apply safety processes, and two more safety summits. This is excellent (if non-binding) work and it needed to be done – well done Britain!

At the same time, the UK Government's AI industry focus is very much about ensuring small tech companies are not crowded out by big tech players, beefing up anti-trust divisions and supporting start-ups – I think that's smart when you no longer have a massive cheque book.

https://www.bloomberg.com/news/articles/2023-10-24/uk-is-set-to-reject-big-tech-call-for-antitrust-appeals-route?srnd=undefined&sref=3eAg5tE1 

 US – the BIG beast – 100 pages of Executive orders from Biden. 

I think this is also the right approach from the US. The aim is to control Big Tech, ensuring that their code is available for Government scrutiny, while leaving enough room for U.S. companies to do their thing.

It basically says that if you are a massive company then you have to play by a set of rules that the smaller players don't yet have to adhere to.

It’s a way of keeping things nimble but hopefully controlling dangerous behaviour before it hits scale. 

So you have to do things such as letting the Government know what controls you are using and stress-testing your code; if you want to pitch for Government contracts you have to follow certain processes; there are rules for critical infrastructure, and so on.

After this there will be lots of little regulations to discuss and negotiate, but it's a lot of regulation and it's early.

I'm happy that this is all going on; at the equivalent point in the internet and social media revolution we set pretty much zero regs - most notably, no privacy laws - and look at the mess we are in now. Social media companies are allowed to amplify the most divisive, inaccurate, damaging content and they have absolutely no responsibility for it, apart from collecting the money they get from eyeballs. They - and we - are incentivised to create rage.

 

7) What we have right now and what works 

All the big marketing services companies are integrating AI into their systems. It will soon be buried, not highlighted, and instead the focus will be on the outputs of the service and what it does - not the 'AI' bit.

But, from a marketing perspective, at its simplest, you should be looking at the following:

 

1) Efficiency products: 

Think of all of Microsoft's 'Agents' or 'Copilots': summarising notes or meetings you missed, finding information or helping you write a deck. They are basically GPT-powered chatbot versions of the Avatars we have been creating at The Virtual Influencer Agency, powered by Gen AI and your internal data sources - we love them and have been seeking to apply them, although they are very new in this form.

https://open.substack.com/pub/synthedia/p/microsoft-copilots-everywhere-a-bing?r=4hby2&utm_medium=ios&utm_campaign=post 

2) Research-based products: helping to make sense of and understand language / emotion / topics - using machine learning to find patterns and predict behaviours. This is what we use for our Data Kinetics products - Live & Breathe | Data Kinetics (liveandbreathe.com)

3) Creative products: automatic / multiple text / image / video generation – see Midjourney / Adobe / DALL-E (now within ChatGPT). These are being used by enterprising companies who are creating dashboards and UIs that enable you to create multiple versions of ads, PowerPoints, etc.
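As a hypothetical sketch of that 'multiple versions' idea - the generate() helper below is a stand-in for whichever Gen AI text API you plug in, not a real library call - you can loop product and tone combinations through one prompt template:

```python
# Hypothetical sketch: batch-generate ad copy variants from one template.
def generate(prompt: str) -> str:
    # Stand-in for a call to your Gen AI provider of choice.
    return f"[generated copy for: {prompt}]"

products = ["trail running shoes", "waterproof jacket"]
tones = ["playful", "premium", "no-nonsense"]

variants = [
    generate(f"Write a 20-word ad for {p} in a {t} tone")
    for p in products
    for t in tones
]

for v in variants:
    print(v)   # six variants: two products x three tones
```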

If you want to know specific companies / brands doing the above, then DM me and I will try to guide you.

UP NEXT - The above really is the basics. I will do another newsletter soon about what actually works, the limitations, where this is all going - and what, as a marketer, you should do next.

I hope that's been useful.

Bye for now!