
ChatGPT reaches PhD levels of intelligence!
AI threatens to eliminate a hundred million jobs!
That’s right people, we’re going there. In this post, we’re biting the fourth-industrial-revolution bullet and talking AI. And frankly, it’s about time. Leaving academia can already feel like a huge decision. ‘What if I hate it? If I do, can I come back?’ And so on. It’s therefore no shocker that I’ve recently heard PhD researchers asking questions like:
- If I leave academia for X, how do I know X will still be a thing in 5 years?
- If AI supposedly has ‘PhD level intelligence,’ what use am I as a REAL PhD in today’s world of work?
To help get a handle on this, I reached out to Chris Webb, Registered Career Development Professional and podcaster extraordinaire, who has much to say on the impact of AI on the world of work. I had the pleasure of co-writing a book chapter with Chris last year on using AI in career guidance for researchers, so was looking forward to hearing his two pence (maybe even a couple of quid’s) worth.
There’s no crystal ball…
First off, I wanted to address how feasible it is to predict whether the job you’re leaving academia for is about to disappear. According to Chris, predictions may be unwise.
Predicting the job market is like sticking your finger in the air. Projections are always changing, he explains. Check who’s making them, too: for example, of course Microsoft want you to believe that journalists and writers are most at risk, because they’re trying to sell a product that can do tasks undertaken by these professions!
We can say anecdotally that some sectors like freelance translation and graphic design have been hit… but the changes aren’t uniform across all employers. For example, Amazon has cut thousands of jobs in the US because their ethos is to lean into AI efficiency. But with other organisations, it’s slightly different; we see companies like HSBC for example recruiting for roles specifically focused on the cultural adoption of AI across the organisation.
Ultimately, Chris explains, businesses that have always looked to outsource work to cut costs will inevitably use AI to substitute those people. But, I’m confident that plenty of other companies will look for more ethical adoption, considering how they can use AI alongside people.
To see which fields and roles are on the upward and downward trajectories, therefore, Chris advises against obsessing over predictions, and instead recommends trying to identify trends:
Knowing what your country’s government is investing in is a good start. In the UK, for example, the Government’s industrial strategy outlines eight key priority sectors. Also, get a sense of the political consensus: for example in the UK, all main parties (except Reform) seem to agree that Net Zero and clean energy are big areas for investment, so you can get an idea of where government money will be spent and what projects might result from that.
Chris adds that, since labour market data struggles to keep up with the pace of change, multiple sources are often needed to verify what’s happening with AI in the job market. For example, Lightcast’s Beyond the Buzz report analyses the AI competencies employers are currently asking for in job adverts, whilst reports from AI labs themselves (such as OpenAI’s GDPval report) can help us see how individuals and organisations are starting to adopt AI.
You don’t need to be an AI evangelist
Another refrain over the past 18 months seems to have been ‘your job won’t be taken by AI, it’ll be taken by a human with AI skills.’ Is that true? And if so, what AI skills?
A while ago, Chris reflects, being ‘data-driven’ and data-informed was top of the agenda if you wanted to succeed in the modern workplace. Now, everything is data-driven! For example, if someone walked into a role today and said, ‘I’m not interested in engaging with data’ or ‘I won’t engage with EDI because I don’t believe in it,’ they would (we hope!) be laughed out of the office! Or at least find it very difficult to get a job. The same goes for AI.
In terms of using AI in a role, for example, an organisation may expect you to use their approved AI tool. Now for someone who’s never used an AI tool before, that could be very daunting. However, what I’m seeing is that many roles are looking for AI-assisted analysis rather than using AI to do everything. This means employers will need YOU, plus all the SKILLS you have around research and data analysis, coupled with experience of trying out AI for those purposes and, most crucially, being able to critically evaluate the outputs and understand where AI helps and where it doesn’t. That’s what I think will distinguish people moving forward.
So, there you have it. Sounds obvious, perhaps, but a discerning AI user who also has a good understanding of their value (and how to position that) is, according to Chris’s insights, undoubtedly in a better position than someone who has no idea about AI.
Even when it comes to AI-related jobs, Chris continues, some are less focused on the tech, and more centred around enablement and training, supporting AI adoption across an organisation. These roles are more about building relationships and understanding organisational systems than they are about being an AI whizz.
Think critically about critical thinking
This next question is something I’ve been sceptical about so far. When it comes to the ‘soft skills’ most in demand in the near future, the top answer often seems to be ‘critical thinking.’ Hurrah: critical thinking should come top of the tree in a researcher’s skillset! But, as I explained to Chris… I just don’t feel I’m seeing this in action. Last week I had a conversation with a friend from my PhD days about how, in our experience, thinking critically – challenging mindsets like ‘this is how we do things round here’ – tends to attract suspicion in our workplaces rather than promotion. So, what does ‘critical thinking’ really mean in this context? And what are the implications if you’re a researcher looking to move beyond academia?
The whole AI skills discourse is woolly and vague, Chris laments. Saying ‘critical thinking is key’ is kind of meaningless. Instead, you have to break it down. In the context of AI, when you break it down, ‘critical thinking’ is a PROCESS.
Chris used the example of working on a project for a local authority who are considering introducing a chatbot. On a project like this, he explains, what critical thinking REALLY means is… Do you ask the right questions about data bias and context? Can you consider the human experience of that AI tool? Can you evaluate the outputs it gives? That’s APPLIED critical thinking, and that’s what’s important.
I found this a useful distinction. As a humanities PhD, when I hear ‘critical thinking’ my brain often jumps to a Derridean deconstruction of the world around us. In the context of working with and alongside AI, however, the ‘critical thinking’ that we’re told employers value is more about holding generative AI accountable: ‘keeping the human in the loop.’
Chris explains this further, emphasising the importance of AI literacy and fluency. Having AI literacy means understanding how the tech works, what it does, and its implications for work and society. Being AI fluent means you can adapt to using AI responsibly and transparently within your work. AI literacy and fluency will be crucial going forward… and researchers are well placed for this.
Given that I really want to believe that skills gained from academic research can help us navigate this brave new world, I ask Chris to expand this last point. He sums up thus:
If you’ve done a PhD, you have learning agility… you’re a triple threat. Firstly, you’re an adaptive learner who can learn, unlearn, and relearn; vital, given the rapid pace at which AI tools and models are changing. You’re also comfortable working as part of a multi-disciplinary community: valuable, as AI has the potential to create a greater number of generalists, so having an awareness of how different specialisms and systems interact can help us figure out how to deploy AI effectively and, crucially, when to hit pause. Finally, you can critically evaluate outputs. As AI tech becomes more agentic – we instruct and supervise it, but give it more autonomy to complete tasks and interact with other agents – the ability to apply domain knowledge and critically evaluate workflows, processes and outputs will become an even more important and widely applicable skill set across a multitude of sectors.
Beware of ‘panic upskilling’
So far then, Chris’s insights suggest that abilities honed through academic research can give a useful ‘edge.’ However, there was one more issue deserving of our AI attention: upskilling. I spoke to Matteo Tardelli (all-round researcher careers good egg) about this recently and he used a phrase that perfectly encapsulated what I’ve seen many PhDs and ECRs struggle with recently: ‘panic upskilling.’ Feeling pressure to go and ‘learn AI’ because that’s ‘the big thing.’ Chris similarly warned against knee-jerk reactions.
The hype can definitely lead to people going out to learn AI without clear intention or context, he explains. Chris compared two Masters students studying applied AI. One’s rationale for spending thousands on the course was pretty much ‘AI is going to be big, isn’t it?!’ The other, however, a clinician, had grown tired of ineffective legacy tech in the NHS and was keen to see how applied AI could enhance both clinicians’ and patients’ experiences. That, for me, is a good example of intentional upskilling or intentional repositioning.
The conclusion? For me, it’s that there’s no one-size-fits-all set of AI skills. Different industries and roles are looking for different things, whilst the pace of AI adoption is uneven across different companies. Take a nuanced, segmented look. If you’re interested in a particular industry, learn how AI is being adopted there rather than assuming every sector is affected the same way. Also, connect AI with your interests and what you’re already good at, rather than heading off to Coursera (or committing to a whole new degree) to take a heap of AI courses for their own sake.
My lesson is to stop fixating on predictions which often only fuel anxiety. Build AI literacy and fluency as is relevant to you and your interests and goals, and upskill intentionally, not reactively.
What you’ve been good at before doesn’t change because of AI. What needs to change is how you position that now: how you show the value you can bring as you, PLUS your analytical and research skills, PLUS your ability to critically evaluate outputs and make the call on what’s a job for AI… and what isn’t.