LLMs and the next user interaction paradigm: from “What you see is what you get” to “Do what I mean”

I’ve started thinking of LLMs as the next UI paradigm.

Set aside, for a moment, the idea of creating an “Artificial General Intelligence”, which is much like the idea of “autonomous driving”.

By now we understand that “autonomous driving” will not materialize, except in limited, well-defined scenarios. The world is too complex.

As many people believe, we’re not going to get an artificial general intelligence (LLMs are, frankly, not intelligent in that sense); LLMs are going to assist humans, as a sort of “digital intern”, in limited, well-defined scenarios.

But think about user interaction: creating a formula in a spreadsheet is complicated; it requires training. Think of image editing: have you ever learned to use GIMP or Photoshop proficiently? Or FreeCAD or AutoCAD? Can the average Joe use computer tools out of the box?

Think of _any_ computer-based task, really, and imagine doing it with an LLM-powered UI.

Suddenly you can perform any task with near-zero training.

Instead of focusing on how to do things, you just state what the outcome should be.
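
To make that concrete, here is a minimal sketch of what an LLM-powered spreadsheet assistant could look like behind the scenes: the user describes the outcome they want, and the model produces the formula. It assumes the `openai` Python package; the model name, prompts, and `formula_for` helper are illustrative, not a reference implementation.

```python
# A minimal "do what I mean" sketch: the user states the desired outcome in
# plain language and an LLM produces the spreadsheet formula that a
# traditional UI would have required training to write.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def formula_for(outcome: str) -> str:
    """Turn a plain-language description of an outcome into a spreadsheet formula."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # any capable model works; this one is just an example
        messages=[
            {
                "role": "system",
                "content": "You translate requests into a single spreadsheet formula. "
                           "Reply with the formula only.",
            },
            {"role": "user", "content": outcome},
        ],
    )
    return response.choices[0].message.content.strip()


# The user never learns formula syntax; they only describe the result they want.
print(formula_for("Average of column B, ignoring rows where column A is empty"))
# e.g. =AVERAGEIFS(B:B, A:A, "<>")
```

The point is not the specific API: it is that the “how” (syntax, menus, toolbars) disappears behind a description of the “what”.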

The “What you see is what you get” paradigm that underpins current UIs is going to be replaced by the “Do what I mean” paradigm.

If you like this post, please consider sharing it.
