OpenAI’s first DevDay revealed new features that threaten thin AI-wrapper startups: web browsing, configurable AI agents, programmatic image generation, and a new text-to-speech model.
The developer community has already started doing plenty of new and interesting things off the back of the DevDay:
Wearables and GPT-4 working together to narrate your everyday life (à la Stranger Than Fiction, above).
After LazyApply completed applications for 5,000 jobs, Joseph says he landed around 20 interviews, a hit rate of under half a percent - dismal compared with the 20 interviews he landed from manually applying to 200 to 300 jobs. But given the time Job GPT saved, Joseph felt it was worth the investment.
At Ashore, we started building our own AI-powered chatbot. Wondering if you can take your dog somewhere on the South Coast in January with a bathtub and two parking spaces? Our bot knows everything about every Ashore location at once, so it can tell you.
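Under the hood, a bot like this usually boils down to a retrieval step over structured location data, with the matching results handed to a language model as context for its answer. Here's a minimal sketch of that retrieval step - the location names, regions, and amenity labels are all invented for illustration, not Ashore's actual data or schema:

```python
# Sketch of the retrieval step behind a location-aware chatbot.
# All data below is hypothetical; in a real system the matching
# locations would be passed to an LLM as context for its reply.

from dataclasses import dataclass, field

@dataclass
class Location:
    name: str
    region: str
    parking_spaces: int
    amenities: set[str] = field(default_factory=set)

LOCATIONS = [
    Location("Harbour House", "South Coast", 2, {"dog-friendly", "bathtub", "wifi"}),
    Location("Moor Cottage", "Dartmoor", 1, {"bathtub", "wood-burner"}),
    Location("Cliff Lodge", "South Coast", 1, {"dog-friendly"}),
]

def find_locations(region: str, min_parking: int, required: set[str]) -> list[Location]:
    """Return locations in a region with enough parking and every required amenity."""
    return [
        loc for loc in LOCATIONS
        if loc.region == region
        and loc.parking_spaces >= min_parking
        and required <= loc.amenities  # required is a subset of the amenities
    ]

# "Dog-friendly place on the South Coast with a bathtub and two parking spaces?"
matches = find_locations("South Coast", 2, {"dog-friendly", "bathtub"})
print([loc.name for loc in matches])
```

The filtering keeps the LLM honest: rather than hoping the model memorised every property detail, you fetch the facts first and let the model only phrase the answer.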
Custom GPTs and longer, up-to-date context via GPT-4 Turbo were easily the two most exciting announcements of DevDay. For the average schlub like me with gainful employment and a ChatGPT subscription, this is great - the technology is more reliable, it’s more accurate, it hallucinates less when it doesn’t know something, and it is overall harder to spot AI-generated content that has been trained on an existing corpus of human work.
Prognostication is hard, but if we look at AI funding news just this week, it tells us where investors are placing their bets as the field matures and expands beyond “ChatGPT for X”:
At the same time, there’s already controversy where the rubber hits the road:
OpenAI first published a preview of GPT-3.5 in December 2022 and released GPT-4 in March 2023, so we’re still just under a year into whatever “this” is. If you don’t feel any changes in your industry yet, that’s because we are still so, so incredibly early.
The FT ran a piece last week (which I recommend reading in full) on the slowdown in certain types of freelancing as AI offers similar services.
US researchers found that within just a few months of the launch of ChatGPT, copywriters and graphic designers saw a steep decline in not only the number of jobs they received, but also the amount they could expect to earn from each job.
When I read this, I was a little surprised that AI has already had such a big effect in such a short space of time.
As someone who was employing the technology in my day-to-day job, I found ChatGPT-powered copywriting actually quite difficult without a lot of fiddling and human intervention before last week’s updates. A human could easily spot AI-generated content, which usually struggled to stick to a particular tone of voice and to be concise. More importantly, if you’re copywriting for a search engine, Google et al are already wise to the practice, which made obviously ChatGPT-powered copywriting counter-productive. DALL-E often struggled to produce an image without some of the figures phasing through walls, or to render a person without a Lovecraftian number of fingers.
But within just five months of the launch of ChatGPT, a measurable number of people were clearly happier to pay their ChatGPT subscription and fiddle than to engage an expert freelancer. With the new updates, that group will grow much, much larger, and the tasks the AI is suited to will expand into other types of freelancing. I’d be a bit worried if I were a freelance data analyst, for example, given it’s now straightforward to generate graphs and analyse data in many forms using ChatGPT. On a time horizon of a year or five, any laptop worker should be interested, if not worried.
In the same FT article, an HBS study on how BCG are using AI-assisted consultants sounded positive (the author John Burn-Murdoch concludes that AI is the consultants’ “frenemy” due to their increased productivity), but also hinted at the same problem that copywriters and graphic designers are already suffering - if the AI is making each consultant 25% more efficient at tasks, does BCG need to employ quite so many human consultants?
There’s a normie answer for “what will happen to the white-collar workers” whenever anyone writes an article on this, which is pretty straightforward and mostly right, on a basic level.
That answer goes something like:
‘Yes, AI will kill some jobs, but it’s not a zero-sum game. Look at the industrial revolution and how that changed the nature of jobs for the better.
We will create better jobs with AI doing more of the manual heavy lifting when it comes to research, information discovery, etc.
If we plan correctly with government policies and education, as well as adapting our existing industries, on net we should have more employed humans, not fewer.
We’ll also have new professions such as prompt engineering, AI ethicists, and more.’
The one part of the industrial-revolution argument that doesn’t necessarily follow is that the type of jobs and the nature of employment will improve.
If AI can do the “entry-level” version of many types of specialised work best - and quicker, and more conveniently than a human - in fields like content generation, then the barrier to entry to those professions becomes steep, and it becomes harder to feed yourself while you progress to the level in that specialism at which humans will pick you over an AI.
This didn’t really matter when we automated the pre-Industrial Revolution jobs. My favourite example is ice cutters, a profession made completely obsolete by the invention of artificial refrigeration. Being “really good” at ice cutting meant back-breaking work - you never got promoted into a better or more strategic ice cutter role, you just could do it a little faster. Swapping that for the operation and maintenance of new machinery may have meant a little less time outdoors, but it also meant a whole lot less frostbite.
If now, in the AI revolution, we are replacing roles that benefit the person doing them beyond the work itself - if, for example, we believe the creative expression of an illustrator is innately good for that person, even when they’re designing a beer advert, and that getting progressively better at the skill over time is a fundamental good - then we have to be sure the replacement role is better for that person in some way. Is some kind of stewardship role over an artificial intelligence more meaningful?
I could give one of those university essay answers where you interrogate the meaning of each word (“what is meaningful work”, “what is a fundamental good”) but instinctively, my answer is that monitoring an AI doesn’t feel especially intellectually fulfilling.
Moving on from “if it should happen” to “what will probably happen”…
I recently subscribed to one of the most successful Substack writers in the UK, Emma Gannon, who wrote The Multi-Hyphen Method - essentially a handbook for diversifying your one chosen hustle into many, and thereby living a more fulfilled and financially healthy life.
I also became a member of Generalist World - a community for white-collar professionals who are intentionally building a varied set of experiences and skills in their career.
At their core, both philosophies share a common thread: keeping more than one string to your bow, so that you can evolve intellectually into a better overall “package” as a person and, therefore, in any given career.
We grow up as multi-modal generalists - when I was 16, I would bounce between a morning GCSE maths class (awful) into an afternoon GCSE History class (amazing) and be required to engage with the content of both, rather than saying it was “not my thing”. We didn’t identify as “baby historians” or “developing mathematicians” - for the sake of finishing our exams, we had to identify under the generalist label of “students” and were taught to not dismiss our ability to do something just because it didn’t come as naturally as something else.
Received wisdom for white-collar/laptop workers jettisons this idea and encourages bright young things to pick a path and stick to it to be successful - to identify as the main thing their job categorises them as.
This used to be great advice, but it is already outdated. If you are anything like me, you can’t move for LinkedIn announcements of ex-colleagues or former professional contacts deciding to become a “fractional” something, moving into a contracting role, or jacking it all in for a generalist role at a startup. Career flexibility is letting people create their own working patterns and design their own careers, free of rigid systems, hierarchies and job descriptions.
I’ve picked up a variety of skills throughout my career so far that are now individually on the brink of being automated by one AI platform or another. At the top of the consideration funnel, if you want to write a cold email to get someone to consider buying your product (my main job when I started in my first real corporate role), there’s an AI platform for that. At the bottom of the sales funnel, Salesforce (public enemy no. 1 of any non-management-track technology sales professional) now claims to enable sales teams to “sell faster and smarter” using generative AI. I recently started learning to video edit, and we already saw above that there’s an AI startup with $1 million+ in funding trying to ensure humans don’t do that anymore.
In a specialist world, this is scary! Why employ me for x thousands per year to send sales emails, if with a bit of tweaking, a custom GPT can do those things for you?
But in a generalist career world, the sum total of an individual’s experiences will be more than these individual building blocks that make them up.
In recruiting, job descriptions may end up less focused on job titles and more descriptive of the business’s immediate priorities. We’ll also probably “fail” people out of a recruiting process less often over specific skill areas, because those hard-skills gaps can be filled by in-house AI solutions.
For example, enjoying the prestige of being called a “Head of Sales” because you have the requisite number of years with a successful track record elsewhere to have earned the title is nice, but then having to work there long after the company has hired 100 more salespeople and moved up-market into a vertical you don’t have experience selling into (and then seeing people being hired in above you, as I’ve seen happen to friends and colleagues!) isn’t a good experience.
What is far better is “we need someone to oversee x, y, z which you have direct experience of, for at least the next 12 months, and if you smash it, we’ll give you some even more interesting stuff to oversee in the next phase of our growth”.
And if, as the CEO, I’m excited about that hire, I’m not going to worry about how much experience they might have using our favourite piece of technology, because we have AI solutions to fill in skills-based gaps.
These types of portfolio-based, outcome-orientated hires will allow people to fit more precisely into the exact type of organisation they excel in, and also give both employer and employee a bit more of a get-out clause when a professional relationship has reached its natural end - something that can feel a little awkward in growth companies, which can look very different from one year to the next, particularly when you pour VC funding into them.
As AI reshapes the work landscape, the value of generalist skills and adaptability will become more and more pronounced. This evolution in careers should, hopefully, be more fulfilling and dynamic for the individual, even if it does present a challenge in the nature of meaningful work itself. It also seems far more possible in a world with AI than without it, because the repetitive rote tasks expected in any career will evaporate, so the detail and nuance of an individual’s career will be far more important as a consideration (rather than “bums on seats”). Generalist, hyphenated careers are a far more positive vision for the AI jobs revolution than many of the traditional answers.
I leave you with a final AI thought outside the career context, that I don’t want to over-explain, as it’s more interesting (and disturbing) when you really think it over.
I hadn’t really grasped one fundamental truth about the intelligence part of artificial intelligence until it was pointed out to me that many of the things smaller or more primitive LLMs have struggled with have commonalities with human lucid dreaming (read more in this thread by hokiepoke and gfodor on X).
I think a lot of writing which focuses on “what it all means” is missing some of the scarier implications of what is really happening at the frontiers of artificial intelligence.