Generative AI products like GPT-4 and ChatGPT are clearly groundbreaking innovations. But can they generate transformative new ideas? New products, services, and businesses will certainly be built using AI, potentially with greater speed and efficiency than ever before. But don't confuse the novelty of seemingly cogent output from a language model with meaningful results or valuable outcomes.
Impressive new prompt-based tools can generate a huge array of seemingly intelligible output as imagery and text. But as OpenAI notes in its release documentation, they can also produce unfounded fabrications in pursuit of their probabilistic goals. Importantly, they can't identify problems, nor can they see new opportunities. Still, they might inspire, if you're attuned to what makes people people and skilled at instilling some humanity into the conversation.
For as long as I can remember, the conversation and debate about the importance of STEM to the future of the economy has seeped into every corner of the professional landscape. As a designer, it was hard to avoid the tiresome edict that designers should learn to code. And as someone involved in the arts, I never understood the continued criticism and ridicule directed at those who would dare study a qualitative discipline, be it art or (god forbid) any of the humanities. So it is with some pleasing irony that the culmination of the decade-long tech boom and AI's supposedly imminent sentience has shone a light on the importance of being able to read, understand, critique, and edit language, both visual and written.
As a result, it appears we've collectively spent a decade devaluing a really important set of skills and, in the process, inadvertently created a functional deficit right when we need those skills most. How convenient for our future AI overlords. But also, how great for those who stuck with the humanities when every sign of the future pointed elsewhere.
Like many technologies of the past, these tools look certain to automate a number of previously human-led roles, including areas once thought to be uniquely human, such as the creation of visual art or music. And while associated jobs may be at risk, the truth is that GPT-4 and ChatGPT are both fundamentally combinatorial rearview mirrors; they can (very quickly!) produce a copy of, or at best a variation on, what has already been created. But the results are not in and of themselves novel. Pong has been around for a long time and has been rewritten in several different programming languages, all easily accessible via a quick search. Given good prompts, GPT-4 can sometimes produce statistically unique outputs, but by its nature it is not offering anything genuinely new to the world.
I don’t care that it’s not AGI, GPT-4 is an incredible and transformative technology.
I recreated the game of Pong in under 60 seconds.
It was my first try. Things will never be the same. #gpt4 pic.twitter.com/8YMUK0UQmd
— Pietro Schirano (@skirano) March 14, 2023
Yes, the library these models were trained on is vaster than anything that came before it (though OpenAI won't say how much bigger it is than GPT-3.5's). And yes, most human-led innovation is a mash-up of two or more previously unrelated things (think of the ubiquitous "Uber for X" start-up pitches, or any new niche music genre). However, the reality is that these tools cannot, and may never be able to, assess the value of their statistically unique output as it relates to its impact on human lives.
At a minimum, these tools will provide clear productivity boosts in the hands of expert coders, writers, and artists. Mostly they help with the getting-started part. In a way, these tools manifest the Pareto principle: they take care of the routine 80% so experts can truly focus on the final 20% that matters.
Last night I used GPT-4 to write code for 5 micro services for a new product.
A (very good) dev quoted £5k and 2 weeks.
GPT-4 delivered the same in 3 hours, for $0.11
Genuinely mind boggling
— Joe Perkins (@joeprkns) March 15, 2023
Much of today's tech labor is implementing already-solved problems using widely available patterns. Take, for example, the (admittedly very impressive) demo from the GPT-4 launch that turned a hand-drawn website sketch into working HTML:
gpt-4 can turn your napkin sketch into a web app, instantly.
we are deep into uncharted territory here. pic.twitter.com/V5HtYHgS6u
— Siqi Chen (@blader) March 14, 2023
The sketch-to-code functionality works because the sketch intentionally limits itself to recognizable elements that follow conventional patterns. The biggest job risk will be to uncritical implementers who simply pattern-match to known solutions. That has value, in that there is no sense in re-inventing the wheel. However, the demo shows how often the wheel is manually recreated (or copied and pasted) in the software development world. And GPT-4 illustrates how much of what goes into new products and services is, in fact, already-solved problems. Consequently, the ability to direct and resolve the final 20% will matter even more.
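To make that concrete, here is a minimal, hypothetical sketch, written as browser TypeScript, of the kind of code such a demo produces; the element names and copy are invented for illustration and are not the demo's actual output. Every piece is a long-established convention (a heading, a button, a click handler) that a model has seen countless times before:

```typescript
// Hypothetical illustration of sketch-to-code boilerplate: nothing but conventional elements.
const heading = document.createElement("h1");
heading.textContent = "My Website";

const punchline = document.createElement("p");
punchline.textContent = "(the punchline)";
punchline.hidden = true; // stays hidden until the button is clicked

const revealButton = document.createElement("button");
revealButton.textContent = "Reveal";
revealButton.addEventListener("click", () => {
  punchline.hidden = false; // the entire "application logic"
});

document.body.append(heading, revealButton, punchline);
```

Nothing here is new; the value lies in deciding whether this is the right thing to build at all.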
For those who know what they want to achieve, understand the trade-offs involved, and can quickly evaluate the outputs, generative AI tools will accelerate workflows and processes, freeing up more time for what those people are uniquely suited to do: bringing net-new ideas into the world.
Quality and impact matter to people. Generative AI doesn't care. Success means different things to each. For an AI model, it's the statistical likelihood of producing the right response based on all prior known responses. For creative people, it's the satisfaction of striking inspiration and having the motivation and desire to will something never before seen into existence. Whether driven by the overconfidence of a founder or the preternatural vision of an artist, an AI system cannot have an innovative, consequential, and positive impact on the world without a human agent choosing to apply its outputs. In the rush to automate, we cannot forget that this is where the real transformative outcomes will originate and be realized.
There’s a long history of the arts exploring generative processes and tools. Without overly simplifying, my conclusion is that generative approaches are more about taste than skill. They rely heavily on evaluation as opposed to the craft of creation. They are also intentionally constrained by a particular goal.
As Brian Eno said in 2012 while discussing his generative music app Scape: "That's in the nature of anything that you make as an artist — you're trying to make a thing, not everything." Generative AI tools attempt to provide access to everything. However, for these systems to create something impactful, they require taste, or discernment, both in intent (the input) and in evaluation (the output).
So while it is hard to dispute the productivity potential inherent in GPT-4 and ChatGPT, the majority of the work needed to render their output meaningful will be evaluative and qualitative. It will require critique in pursuit of higher quality, and careful assessment of the risks and second-order effects of deploying these systems behind the scenes as something people interact with, knowingly today and, in the future, unknowingly, in pursuit of their goals.
GPT-4 does drug discovery.
Give it a currently available drug and it can:
– Find compounds with similar properties
– Modify them to make sure they're not patented
– Purchase them from a supplier (even including sending an email with a purchase order) pic.twitter.com/sWB8HApfgP
— Dan Shipper 📧 (@danshipper) March 14, 2023
Start-ups and established businesses alike will, yes, need people who get excited and inspired by new technology and can actually use it. But more importantly, they will need to invest in people who can strategically interrogate these systems and evaluate and refine their outputs in order to solve real problems. In the right hands, the speed of experimentation goes up. But unlike the often brute-force process of pivoting until something sticks, that speed will necessitate testing for edge cases and unintended outcomes as much as for fit. That is a good thing in the hands of someone who knows what to look for and can properly formulate a hypothesis about how the application of generative AI will impact the people it interacts with. Not just whether people will use it.
GPT itself is not going to solve problems. Nor will it be able to identify the next opportunity for your business. However, it might free up the right people to start looking at the world in new and exciting ways. And for a STEM-leaning workforce filled with implementers and optimizers, learning to read nuance and ambiguity will help them identify the differences that make a difference, in text and image outputs and in the world at large. This will have its own learning curve. Maybe an underemployed humanities graduate is available to teach them how.
If businesses are wondering what transformational opportunities generative AI might offer, they may want to take a step back and look at the people they currently employ. Find the ones who can identify and explain problems, and those with the skill to experiment with and evaluate potential solutions. The most important job to emerge from the adoption of generative AI tools will be a more strategic role, and not necessarily one requiring an MBA. Rather, it will be a role for those who can discern and maximize long-term positive human impact, not the short-term, humanity-simulating automation that is the obvious low-hanging fruit of these tools. To fully realize the potential of the historical wealth of human knowledge and experience contained within these language models, we will need the humanities to tease it out.