A-Why

[Image: a teddy bear on a skateboard in Times Square. Why? Generated by DALL-E.]

The recent flurry of excitement about OpenAI in particular, and AI in general, is certainly reminiscent of the early days of other technologies like the web, web 2.0, and mobile. I suppose it is therefore predictable that we will see a hype cycle play out. It is hard to know exactly how close we are to the top, but we may still be a while away from the peak.

It’s an exciting period, but I think it’s also time to think about the pitfalls to avoid.

One pattern we see over and over again with disruptive technologies is best captured by the advice, ‘Fall in love with the problem, not the solution’. (The origins of this idea are somewhat unclear, but it has now become a book by Waze co-founder Uri Levine.)

When we see novel capabilities that are themselves captivating, it can be all too easy to jump straight to a solution, insert AI into it, and get immediately into building it. Since the solution, by definition, includes the captivating new technology, it will be fun and exciting to build, but will it address any actual problem?

What is it that people are finding hard to do? How many of them are there? How severe is the difficulty? And perhaps most importantly, what are they doing to solve the problem now? More often than not, your product is competing with doing nothing, not with someone else’s product.

A second catch for entrepreneurs and product people is going to be ‘the drunk, the keys and the lamp post’. For those who don’t know the story, it is here: https://en.wikipedia.org/wiki/Streetlight_effect. With OpenAI models (such as the GPT series behind ChatGPT), there will be a lot of people trying to take the obviously interesting thing they do – chat to people – and turn it into a product. But, with the API doing so much of the heavy lifting, it is going to be essential to differentiate your product properly. The large language model cannot itself be the differentiator (since it is available to everyone). What is it that you are adding to the AI model to build something with genuine utility?
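To underline just how much of the heavy lifting the API does, here is a minimal sketch of a ‘chat product’ built on OpenAI’s Python client. The model name and prompt are placeholders, and it assumes the openai package is installed with an API key set in the environment; the point is how little is left once the API call is written.

```python
# Minimal sketch of a "chat product": almost everything here is the API.
# Assumes the `openai` Python package is installed and OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

def ask(question: str) -> str:
    """Send one user message and return the model's reply."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # placeholder model name
        messages=[{"role": "user", "content": question}],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(ask("Write a haiku about a teddy bear on a skateboard."))
```

Everything that could actually differentiate a product (the proprietary data, the domain workflow, the distribution) lives outside these fifteen lines, and has to come from you.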

I shared this list earlier on LinkedIn of brands already emerging in the space. There will likely be a lot more, and some of them may even become daily utilities. If previous cycles are anything to go by, however, many won’t. The difference will be – as always – whether you can find genuine problems to solve and genuinely solve them. OpenAI and other models may provide new ways to do just that, but we’ll have to work harder than normal to avoid the seduction of the magic of a new technology.

The third, and more difficult, challenge with OpenAI is understanding the limits of what is possible. As we watch it write poems and songs, spit out websites, and even write condolence messages better than many of us, it is easy to conclude that it will, in time, be able to do everything humans can.

But being able to do some things better than people does not mean these systems will ever be able to do everything better. It may not matter for the problem you’re trying to solve, but we never thought computers would shortly be our overlords just because they could compute pi to more digits or process the structure of DNA. So why do haikus and tax returns make the difference?

For those who are interested in this line of thought, there is a fascinating argument made by John Searle, still widely discussed in philosophical circles, called the ‘Chinese Room’ argument. Searle imagines a room with a slot for inputs and a slot for outputs (both written on bits of paper). The room can answer queries written in Chinese, responding also in Chinese. Who knows, maybe it can even write casserole recipes or pass the Chinese equivalent of the bar exam.

But it turns out that the person inside the room, reading the questions and writing the answers, doesn’t speak Chinese. They just have lots of books that they use to decode the questions and construct the answers.

Will they ever learn Chinese? Searle’s argument is that they will not, because all they are doing is manipulating symbols. This is a model for whether machines will ever really understand the answers they are spewing out. Machine learning may look super clever, but it is synthesising, not originating.
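To make the symbol-manipulation point concrete, here is a toy illustration (my own sketch, not Searle’s formulation, and vastly simpler than any real language model): a ‘room’ that returns fluent-looking Chinese purely by lookup, with no understanding anywhere in the loop.

```python
# A toy "Chinese Room": answers are produced by pure symbol lookup.
# The rule book is just a dictionary; the operator never understands
# the questions or the answers being handed back.
RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",          # "How are you?" -> "I'm fine, thanks."
    "今天天气怎么样？": "今天天气很好。",  # "How's the weather?" -> "It's lovely today."
}

def room(question: str) -> str:
    """Hand back whatever the rule book dictates, or a stock reply."""
    return RULE_BOOK.get(question, "请再说一遍。")  # "Please say that again."

print(room("你好吗？"))  # fluent output, zero comprehension
```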

It’s been interesting that GPT-4 can beat an (automated) Turing test (i.e. a test of whether it’s a machine), but it actually does it by manipulating a human.

Will it ever really understand? I thought there was only one ‘person’ to ask.

Can you solve a captcha?

As an AI language model, I don't have the ability to interact with websites or solve captchas directly. Captchas are typically designed to prevent automated bots from accessing certain parts of a website or performing certain actions, so it's unlikely that I would be able to bypass them even if I could interact with websites.


What about a Turing test?

As an AI language model, I can participate in a Turing test, which is a test of a machine's ability to exhibit intelligent behavior that is indistinguishable from that of a human. The Turing test involves a human evaluator who engages in a natural language conversation with both a human and a machine (in this case, me) without knowing which is which. If the evaluator is unable to consistently distinguish between the human and the machine, the machine is said to have passed the Turing test.

While I have been trained to generate human-like responses and engage in natural language conversations, the effectiveness of my responses in a Turing test would depend on the specific task or domain of the conversation and the sophistication of the evaluator. Ultimately, the goal of the Turing test is to evaluate a machine's ability to exhibit human-like intelligence, which is a complex and ongoing research topic in the field of artificial intelligence.
