Are we in for an AI winter?
It’s no secret that since ChatGPT went live, generative AI – and the topic of AI as a whole – has gone viral for all the best and worst reasons.
Whether it’s the Google DeepMind scientists and a biochemist winning a Nobel Prize for their AI breakthroughs in protein structure prediction, or Chevrolet suffering an embarrassing blow at the hands of a dealership chatbot, the excitement, opportunity, hope, innovation and, frankly, fear that has sprung up has been front-page news constantly. I’m particularly excited because, whilst I think generative AI has a long way to go before it’s truly applicable to a broad range of scenarios, the acceleration we’re feeling in this space has been face-rippling, and it’s clear we’re on the cusp of the next technological revolution.
Whilst all this innovation is very exciting, I do need to be a bit of a buzzkill for a moment, as there are historical lessons we need to learn.
Let’s take a step back for a moment and revisit the data boom. Despite all the exciting innovations that took place during that time, we seem to find ourselves not correctly managing, and therefore not trusting, our data. The challenges we see today in this space aren’t particularly new, and there are plenty of established best practices for addressing them, yet I’m hearing over and over that companies lack the capability, investment, or motivation to tackle these problems head-on.
This sits in stark parallel with a huge, inflated demand for AI – because AI will only ever be as good as the data it’s given. We know businesses want AI, but how do you keep up? Many businesses are consuming it in small pockets with a small footprint: you may have a few AI extensions in your software stack, or a couple of out-of-the-box AI solutions – so why do you need to worry today?
Let’s explore a few of the common challenges we see within data, and how they relate to AI:
- Speed and generation of data: The volume and velocity of data have increased massively. Whilst this means there are ‘more things to leverage’, it also means there are more things to manage, and whilst that presents a whole host of opportunities when done correctly, businesses across the board are struggling with data management. This won’t change with AI: the size and adoption of AI applications will multiply in much the same way, except this time there is (currently) far less control and regulation.
- Regulation: Regulation has increased the overhead and pressure around data management, and there is a lot of AI-focused legislation being discussed around the world. It seems every major global superpower has its own approach – but if we just take the EU and its AI Act as an example, we can see that some AI systems will be very heavily regulated and, in some cases, outright banned.
- Garbage in, garbage out: This phrase is not new, and has plagued data for a long time. The quality and accuracy of your data matter for many tangible and intangible reasons – and if you want to use AI, that importance becomes even clearer. Not only is AI typically made easier to access, but we as humans have a tendency to humanise AI (and in some cases, this is by design), meaning we inherently trust it more, and so its errors are perceived as more than just a system hiccup. Now imagine a business-endorsed AI spewing garbage at someone who doesn’t necessarily know better. (A minimal sketch of the kind of data quality check that helps guard against this follows this list.)
- Demand: The demand for insights and analytics during the data boom put a lot of pressure on vendors, and their solutions were quickly adopted by those looking to gain a competitive edge. The AI boom is no different. We’re seeing a lot of demand around AI – sometimes businesses don’t even know what they want it for, they just know they want it – and this can lead to counterproductive or poorly thought-out behaviours such as releasing too early, resulting in poor performance or worse.
- Advancements in technology: The advancements made during the data boom drastically lowered the barrier to collecting and storing data. The advancements we’ve seen for AI have mostly been around GPUs and chip capabilities, as well as the creation of foundation models that support more and more use cases and lower the barrier to entry. This has, in itself, caused a huge global political storm, as we’ve seen the US restrict exports of advanced semiconductors to China, with the potential for further restrictions aimed at limiting the power and advancement of the chips available to it.
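To make the ‘garbage in, garbage out’ point above a little more concrete, here is a minimal, hypothetical sketch of the sort of data quality gate that could sit in front of an AI training or retrieval workload. The dataset, column names, and thresholds are illustrative assumptions rather than a prescribed implementation.

```python
# Illustrative sketch only: a simple quality gate before data reaches an AI workload.
# The table, column names and thresholds below are hypothetical examples.
import pandas as pd


def quality_report(df: pd.DataFrame) -> dict:
    """Summarise a few basic quality signals for a tabular dataset."""
    return {
        "rows": len(df),
        "duplicate_rows": int(df.duplicated().sum()),
        "null_fraction": df.isna().mean().round(3).to_dict(),
    }


def passes_quality_gate(df: pd.DataFrame, max_null_fraction: float = 0.05) -> bool:
    """Return False if the data looks like 'garbage in': empty, duplicated, or too many missing values."""
    report = quality_report(df)
    if report["rows"] == 0 or report["duplicate_rows"] > 0:
        return False
    return all(frac <= max_null_fraction for frac in report["null_fraction"].values())


if __name__ == "__main__":
    # Hypothetical customer table with one missing email address.
    customers = pd.DataFrame({
        "customer_id": [1, 2, 3],
        "email": ["a@example.com", None, "c@example.com"],
    })
    print(quality_report(customers))
    print("OK to feed downstream AI?", passes_quality_gate(customers))
```

In practice, checks like these would live in whatever data quality or observability tooling you already run; the point is simply that the gate sits in front of the AI, not behind it.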
My point is that there is a lot of talk and ambition around these innovations from the exciting angle, but as we’ve seen with previous technological booms, we cannot walk into this with reckless abandon. How AI is used, the governance and security of both your AI and the data being accessed or used for training, and many other elements need to be considered when deciding to adopt and launch these applications.
Previous AI winters & the current AI spring
Many people don’t know this, but we’ve already had AI booms. Between the 1950s and the 2000s, the field went through several hype cycles, each followed by rounds of criticism and disappointment that ultimately led to reduced funding and major collapses within the industry. There are a few notable periods within this timeframe:
- The golden years: Between 1956 and 1974, the programs being created astonished most people, with computers able to speak English and solve mathematical problems. This period saw the development of micro-worlds and Lisp, a programming language designed for artificial intelligence research that became the field’s workhorse, as well as the creation of ELIZA, widely considered the first chatbot, and Shakey, the first mobile robot able to reason about its own actions. The field gained a lot of attention from institutions and investors, and many promises were made.
- The winter: Unfortunately, many of the exciting advances made during the golden years stagnated. Much of this came down to limited computing power: the applications could only handle trivial versions of the problems they were trying to solve. In 1973, the Lighthill report gave a very pessimistic prognosis for many core areas of research in the field, stating that “in no part of the field have the discoveries made so far produced the major impact that was then promised”. This led to a huge decrease in funding within the UK, DARPA cut back its funding of academic AI research, and in 1987 the Lisp machine market collapsed.
- 1990 and beyond: Even through the 90s and into the 2000s, the reputation of AI remained poor. Many businesses and investors were put off by terms like ‘voice recognition’ and ‘artificial intelligence’, which were associated with systems that never lived up to their promises, and many researchers responded by re-branding their work under terms such as machine learning or informatics. AI techniques were still being used as components of larger systems, but the field was rarely credited for them. In fact, how aware were you of AI in the things you used before 2022? AI has been in phones, for example, for much longer than that. Some grew frustrated at the myth that AI had failed, even though it was around us every second of the day.
- Renewed AI spring: And here we are, back in the modern age and firmly within an AI spring. By every measure – academic publications, investment, jobs, startups and more – the level of interest and funding is at its highest ever.
So, what does this mean for the industry and how might we learn from previous boom cycles?
A key lesson from previous hype cycles, in both AI and other industries, is the importance of setting and adjusting expectations around AI, and acknowledging that AI today has its limitations. It feels like the field is set up for disappointment, as it’s competing with massively sensational headlines and over-inflated expectations. Recent advancements would have seemed comically sci-fi had they been released five years ago, but now? In some cases they land as nothing but a disappointment. Or, if expectations are met, the immediate question is ‘ok, what next?’.
If we look back at the previous AI winters, a core limitation was the technology itself. Turning the lens on today, we are still missing some key enablers, such as hardware powerful enough to run AI at scale (and sustainably), as well as the big data needed to feed it. It took a lot of work to overcome these obstacles enough to get to where we are today, but they still aren’t solved, and hopes of solving them often hinge on incomplete technologies such as quantum computing.
Whilst this current spring does have a different feel to it – we’re standing on the shoulders of fundamentally more capable giants – we should not assume the field is immune to another fall. There has already been a palpable shift from where we were at the start of 2024, and public perception, investor relations, and commitments from institutions may prove all too fragile if excessive hype is not counterbalanced by an acknowledgement of the challenges AI faces. And whilst hopeful optimism is needed to inspire, experts within the field must acknowledge their impact on public perception.
Therefore, business adoption of AI needs to be considered carefully and with cautious optimism. Whilst many expectations are hugely inflated, there are plenty of exciting advancements you can make use of today – but we need to be mindful of the future. AI ethics, and the handling of data, will be of paramount importance as we move into this unprecedented era. Privacy concerns are already cropping up – look at the Harvard students, for example, who paired facial recognition with Meta smart glasses to essentially socially engineer strangers, matching faces and pulling up information such as addresses, jobs and interests. I’m sure that over the next 5 to 10 years we’ll see far more interest in AI governance, privacy, and ethics frameworks as more legislation bears down on businesses using AI.
If you’re interested in talking more about data & AI, reach out to one of our Data Experts today.